The Future Of Text

Edited by Frode Alexander Hegland

First Published 2020. Second Printing with minor corrections 2021.

All articles are © Copyright of their respective authors.

This collected work is © Copyright ‘Future Text Publishing’ and Frode Alexander Hegland.

The PDF edition of this work is made available at no cost and the printed book is available from ‘Future Text Publishing’ (a trading name of ‘The Augmented Text Company LTD’, UK).

This work is freely available digitally, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself. The only constraint on reproduction and distribution, and the only role for copyright in this domain, should be to give authors control over the integrity of their work and the right to be properly acknowledged and cited.

Typeset in Adobe Caslon Pro and Shinglewoode by Timothy Donaldson (except for Kindle editions).

ISBN: 9798556866782


future text publishing


If you are reading this book in the Augmented Text Tool ‘Reader’ on macOS, which was developed to demonstrate some of the editor’s ideas about interactive text, you can interact with the text in experimental ways:

Author, Reader and Liquid are available from:


I am grateful for the encouragement and support of my friend and mentor Doug Engelbart.

I would further like to thank Vint Cerf for supporting the Symposium over the years and to Ted Nelson for continuously opening my mind to how wonderful the Future Of Text can be.

I thank my perfect wife Emily Maki Ballard Hegland (and perfect mother to our son Edgar) for her support, encouragement and patience over the years and Edgar himself, who never ceases to make me smile.

I would also like to thank my parents Turid and Ole Hegland, and my brother Henning.

Sarah Walton helped me to formulate my perspective of ‘liquid information’ and Howard Rheingold helped me understand the roots of our ‘Silicon Valley’ world.

Jacob Hazelgrove realized my dream of building a word processor (Author) and reader application (Reader). Roman Solodovnikov and his team built Liquid in many versions over the years and most recently re-invigorated LiSA.

The academic community at WAIS at the University of Southampton, including my PhD advisors Dame Wendy Hall and Les Carr, as well as David Millard, Chris Gutteridge, Mark Anderson (particularly for hypertext history) and many others, has been a fertile thinking ground for my personal understanding of what text is and, in particular, what academic text can be.

I am further grateful to my friends for their many fantastic perspectives, which have helped me see things just a bit differently over the years, including (in a very random order) Harsha DeSilva, Paul Presley, Houria Iderkou, Livia Polanyi, Janine Earl, Stephen Fry, Bjørn Borud, Aspasia Dellaporta, Joe Corneli, David Price, Tom Standage and Bruce Horn, my first computer ‘hero’ who became a friend. Jane, Gian and Termini have also provided me with a haven of a workspace outside, and Paul and Tim inside–thank you. Environments make a difference, whether physical or digital, as the genius of environment design Therese always reminded me.

I thank Valentina Moressa for helping to communicate and promote the book to a wider audience than I could ever have managed on my own, with valuable assistance from Julia Wright. Thanks also to Keith Martin for help with design and Adobe InDesign, and to Tim Donaldson for font choices, including his personally designed ‘Shinglewoode’, which is used for blockquotes.

Finally, and most of all, a huge thank you to all the contributors who truly ‘made’ this book!

Dedicated to my son Edgar and my father Ole.

This book is a ‘Future Text Initiative’:

Frode Alexander Hegland





Vinton G. Cerf Internet Co-Inventor & Pioneer


Frode Alexander Hegland Developer of Author, Reader & Liquid, and host of the Future Of Text Symposium


Adam Cheyer Co-Founder of Siri and Viv Labs

Adam Kampff Neuroscientist at the Sainsbury-Wellcome Centre and Founder of Voight-Kampff

Alan Kay One of the earliest Pioneers of object-oriented programming, personal computing, graphical user interfaces (inventor of the overlapping window and icon “Parc GUI”), and computing for children

Alessio Antonini Research Associate at Knowledge Media Institute, The Open University

Alex Holcombe School of Psychology, Faculty of Science, The University of Sydney

Amaranth Borsuk Poet, Scholar, and Book Artist, Associate Professor, University of Washington, Bothell, and Author of The Book

Amira Hanafi Writer and Artist

Amos Paul Kennedy Jr Artist Printer

Anastasia Salter Associate Professor, University of Central Florida. Author of Jane Jensen. Co-author of A Portrait of the Auteur as Fanboy, Adventure Games

Andy Matuschak & Michael Nielsen Software Engineer, Designer and Researcher. Quantum computing and modern open science movement Pioneer

Ann Bessemans & María Pérez Mena Professor and post doctoral researcher at PXL-MAD School of Arts / Hasselt University, research group READSEARCH. Typography and type design teachers BA graphic design and MA Reading Type and Typography

Andries (Andy) van Dam Co-founder of Brown University’s CS department and its first chairman, Brown’s first VP of Research, Co-designer of Hypertext Editing System and many hypermedia systems, Co-author of Fundamentals of Interactive Computer Graphics and of Computer Graphics: Principles and Practice

Anne-Laure Le Cunff Founder of Ness Labs

Anthon P. Botha Director of TechnoScene Ltd

Azlen Elza Designer, researcher and software experimentalist

Barbara Beeton Editor of TUGboat (TeX Users Group), retired from the American Mathematical Society

Belinda Barnet Swinburne University, author of Memory Machines: The Evolution of Hypertext

Ben Shneiderman Professor, author and Human Computer Interaction pioneer, University of Maryland

Bernard Vatant Former consultant at Mondeca and Linked Data Evangelist

Bob Frankston Co-creator of VisiCalc, the first spreadsheet for personal computers

Bob Horn Senior Researcher, Human Science and Technology Advanced Research Institute (H-STAR) Stanford University and Fellow, World Academy of Art and Science

Bob Stein Founder of Criterion, Voyager and the Institute for the Future of the Book

Catherine C. Marshall Texas A&M University and Hypertext developer

Charles Bernstein Poet, essayist, editor, and literary Scholar

Chris Gebhardt Software engineer and researcher, The InfoCentral Project

Chris Messina Hashtag inventor, product designer, technologist

Christian Bök Associate Professor, Charles Darwin University

Christopher Gutteridge University of Southampton and Developer of academic repositories

Claus Atzenbeck Hof University & General Co-Chair of the 2019 ACM Conference on Hypertext & Social Media

Daniel M. Russell Senior Research Scientist for Search Quality and User Happiness at Google

Danila Medvedev Leading Russian futurologist and architect of NeyroKod

Danny Snelson Assistant Professor of English, UCLA, Editor, Archivist and Author of EXE TXT

Daveed Benjamin CEO. Author of the first-of-its-kind augmented reality book Pacha’s Pajamas: A Story Written By Nature, which features Mos Def, Talib Kweli, and Cheech Marin

Dave King Founder of Exaptive Inc.

Dave Winer On the net since mid-70s. Started two Silicon Valley companies. Wrote for Wired. Fellow at Harvard and NYU. Founder of podcasting, blogging, RSS and Open Web.

David De Roure Professor of e-Research, Oxford e-Research Centre

David M. Durant Associate Professor/Federal Documents and Social Sciences Librarian, and author of Reading in a Digital Age

David Jablonowski Artist

David Johnson Co-author of Law and Borders, The Rise of Law in Cyberspace

David G. Lebow CEO & Chief Learning Scientist at HyLighter LLC

David Millard University of Southampton

David Owen Norris Head of Classical Performance, Professor of Music University of Southampton

David Price DebateGraph Co-Founder

David Weinberger Author of Everyday Chaos, Too Big To Know, Everything is Miscellaneous, Small Pieces Loosely Joined and Co-Author of The Cluetrain Manifesto

Dene Grigar Professor and Director, The Creative Media & Digital Culture Program; the Electronic Literature Lab; Washington State University Vancouver

Denise Schmandt-Besserat Professor Emerita of Art and Middle Eastern Studies at the University of Texas at Austin and author of How Writing Came About

Derek Beaulieu Director, Literary Arts, Banff Centre for Arts and Creativity

Doc Searls Editor-in-Chief of Linux Journal, Author of The Intention Economy & Co-Author of The Cluetrain Manifesto

Don Norman Engineer, Scientist, Designer, and Author of The Design of Everyday Things and Things That Make Us Smart

Douglas Crockford JSON Inventor

Duke Crawford visual vocal actionable text

Ed Leahy Former advertising creative on Madison Avenue, Professor of Advertising, Syracuse University

Elaine Treharne Stanford University Text Technologies

Élika Ortega Department of Spanish and Portuguese, University of Colorado Boulder

Esther Dyson Executive founder, Way to Wellville

Esther Wojcicki CEO, Founder of Palo Alto High Journalism, Creative Commons Advisory Council and Author of How to Raise Successful People

Ewan Clayton Calligrapher, Teacher and Author

Fiona Ross Typographic consultant and Professor in Type Design, University of Reading

Fred Benenson & Tyler Shoemaker Data enthusiast, Artist, and Creator of Emoji Dick. PhD Student, University of California, Santa Barbara

Frode Alexander Hegland Developer of Author, Reader and Liquid, and Host of the Future Of Text Symposium

Galfromdownunder aka Lynette Chiang, Author of The Handsomest Man in Cuba

Garrett Stewart James O. Freedman Professor of Letters, University of Iowa, and Author of The Look of Reading: Book, Painting, Text

Günter Khyo Received his Master's degree in Software Engineering from the Vienna University of Technology. He was a research and teaching associate at the University of Applied Sciences in Wiener Neustadt

Gyuri Lajos TrailMarks Founder

Harold Thimbleby See Change Digital Health Fellow at Swansea University and Author of Press On

Howard Oakley Mac Developer and Technical Writer

Howard Rheingold Educator and author of Net Smart & Tools for Thought: The History and Future of Mind-expanding Technology

Ian Cooke Contemporary British Collections, The British Library

Iian Neill Johannes Gutenberg-Universität Mainz

Jack Park TopicQuests Foundation, co-founder

Jakob Voß Information Scientist and Software Developer

James Baker Senior Lecturer in Digital History and Archives at the University of Sussex

James Blustein Associate Prof. of Computer Science & Information Management at Dalhousie University, Canada

James O’Sullivan Lecturer in digital humanities at University College Cork. Author of Towards a Digital Poetics.

Jane Yellowlees Douglas Author of pioneering hypertext fiction, including I Have Said Nothing, one of the first published works of hypertext fiction.

Jay David Bolter Wesley Chair in New Media, Georgia Tech and author of The Digital Plenitude: The Decline of Elite Culture and the Rise of New Media

Jeremy Helm Communication Advocate, Systems Engineer and California Bay Area Organizer of

Jesse Grosjean Developer of WriteRoom and TaskPaper at HogBay Software

Jessica Rubart Professor of business information systems, OWL University of Applied Sciences and Arts

Joe Corneli Researcher working at the intersection of collective and artificial intelligence

Joel Swanson Artist and Assistant Professor, ATLAS Institute, University of Colorado, Boulder

Johanna Drucker Breslauer Professor of Bibliographical Studies, UCLA and Author of The Visible Word & Diagrammatic Writing

Johannah Rodgers Independent Artist, Scholar, Author of Technology: A Reader for Writers, former Associate Professor and Founding Director First Year Writing @ City Tech, The City University of New York

John Armstrong Writer & performance artist

John Cayley Poet, Programmatologist, Professor of Literary Arts at Brown University and author of Grammalepsy

John-Paul Davidson Producer, director and author of Planet Word

Joris J. van Zundert Senior Researcher and Developer in humanities computing, department of literary studies at the Huygens Institute for the History of The Netherlands

Judy Malloy Electronic Literature pioneer; Lecturer, School of the Art Institute of Chicago

Kari Kraus & Matthew Kirschenbaum Co-Directors, BookLab, University of Maryland

Katie Baynes Aspiring Author

Keith Houston Author of The Book and Shady Characters

Keith Martin Senior Lecturer, London College of Communication, writer and inventor of live word count

Kenny Hemphill Technology Journalist and Copy Editor

Ken Perlin Professor of Computer Science, New York University and Director, NYU Future Reality Lab

Leigh Nash Publisher at Invisible Publishing

Leslie Carr Professor of Web Science, University of Southampton

Lesia Tkacz PhD Researcher at University of Southampton

Leslie Lamport Inventor of LaTeX

Livia Polanyi Linguistic Discourse Theorist, NLP Researcher and Poet

Lori Emerson Associate Professor and Director of the Media Archaeology Lab, University of Colorado Boulder

Luc Beaudoin & Daniel Jomphe Adjunct Professor of Cognitive Science & Education at Simon Fraser University, co-founder of CogSci Apps, owner operator of CogZest. Cognitive productivity enthusiast

Manuela González Handwriting for Branding Expert

Marc-Antoine Parent Developer of IdeaLoom and HyperKnowledge

Marc Canter CEO of and co-founder of Macromedia

Mark Anderson University of Southampton, PhD graduate

Mark Baker Author of Every Page is Page One and Structured Writing: Rhetoric and Process

Mark Bernstein Eastgate Systems, developer of hypertext software Tinderbox and Storyspace

Martin Kemp Emeritus Professor of the History of Art, Trinity College, Author of several books on Leonardo Da Vinci as well as Visualizations

Martin Tiefenthaler Book and Graphic Designer, Teacher of Typography and Semiotics at ›die Graphische‹, Vienna/Austria; Co-Founder of tga (typographic society austria)

Maryanne Wolf Director, UCLA Center for Dyslexia, Diverse Learners, and Social Justice, Visiting Professor and author of Reader, Come Home: The Reading Brain in a Digital World and Proust and the Squid

Matt Mullenweg Co-founder of WordPress

Michael Joyce Hyperfiction pioneer, Theorist, and author of Remedia: A Picaresque

Michele Canzi Strategist turned Tech operator. Essayist and Angel investor

Michael Nutley Writer and Editor specialising in online media and marketing

Mike Zender Editor, Visible Language. Professor, Myron E. Ullman Jr. School of Design, University of Cincinnati

Naomi S. Baron American University, Author of ‘Words Onscreen: The Fate of Reading in a Digital World’

Nasser Hussain Senior Lecturer, Leeds Beckett University

Neil Jefferies Head of Innovation, Bodleian Digital Libraries, University of Oxford and Director, Data Futures CLG

Niels Ole Finnemann Professor Emeritus. Department for Communication, University of Copenhagen

Nick Montfort Poet, Professor of Digital Media at MIT and Author of The Future

Panda Mery Productive Irritant

Patrick Lichty Assistant Professor, Animation/Multimedia Design, College of Arts and Creative Enterprises, Zayed University, Abu Dhabi

Paul Smart Philosopher and Author of Blade Runner 2049: A Philosophical Exploration

Peter Cho Head of Design at Pocket, Type Designer, Creative Coder

Peter Flynn Principal Consultant at Silmaril Consultants and former Head of Research and Academic Computing Support at UCC

Peter Jensen & Melissa Morozzo Moleskine Chief Brand & Innovation Officer and Melissa Morozzo

Peter J. Wasilko New York State licensed Attorney, Independent Scholar, and Programmer

Phil Gooch Founder & CEO Scholarcy

Pip Willcox Head of Research, The National Archives, UK

Rafael Nepô Information Architect and Founder of

Raine Revere Psychotherapist, Software Engineer, and Sensemaking Prosthesis Researcher. Founder of em.

Richard A. Carter Artist and Lecturer in Digital Media, University of Roehampton

Richard Price Head, Contemporary British Collections, The British Library

Richard Saul Wurman Author of 90 books including Information Architects, Information Anxiety, Co-Author of Information Design and Creator of the TED conference

Rollo Carpenter Artificial Intelligence Developer of Cleverbot

Sage Jenson & Kit Kuksenok Artist focused on speculative biology and emotive simulation. Natural language processing Software Developer.

Sam Brooker Assistant Professor, Richmond University UK

Sarah Walton Author and Digital Consultant

Scott Rettberg American Digital Artist and Scholar of electronic literature based in Norway, co-founder of the Electronic Literature Organization

Shane Gibson Technologist and Political Scientist

Shuo Yang User experience designer at Google and Author of Google data visualization design guidance

Simon Buckingham Shum Director & Professor of Learning Informatics, Connected Intelligence Centre, University of Technology Sydney

Sofie Beier Associate Professor, Head of Centre for Visibility Design, The Royal Danish Academy of Fine Arts (KADK)

Sonja Knecht Copywriter, Design Writer, Lecturer in Verbal/Visual Communications at Berlin University of the Arts and other institutions

Stephan Kreutzer Hypertext Systems Implementer

Stephanie Strickland Poet in print and digital media, author of How the Universe Is Made

Stephen Lekson Curator of Archaeology, Jubilado, University of Colorado Museum of Natural History

Stevan Harnad Editor, Animal Sentience, Professor of Psychology, Université du Québec à Montréal, Adjunct Professor of Cognitive Science, McGill University and Emeritus Professor of Cognitive Science, University of Southampton

Steve Newcomb Consultant

Stuart Moulthrop Distinguished Professor of English at the University of Wisconsin–Milwaukee and Founding Board Member of the Electronic Literature Organization. Author of Victory Garden, Reagan Library and Hegirascope

Ted Nelson Theodor Holm Nelson Visionary, Interactive Media Pioneer and coiner of the term ‘hypertext’

Teodora Petkova PhD student at Sofia University and author of Brave New Text

Tiago Forte Productivity Expert and Creator of Building a Second Brain

Timothy Donaldson Author of Shapes for Sounds, Typographer, Typeface Designer, Letterworker and hopeful Anthropogogic Dust Disturber at Falmouth University

Tim Ingold Professor Emeritus of Social Anthropology at the University of Aberdeen and Author of The Perception of the Environment

Timur Schukin & Irina Antonova Partners, NakedMinds Lab, Russia

Todd A. Carpenter Executive Director of the National Information Standards Organization

Tom Butler-Bowdon Author of 50 Self-Help Classics

Tom Standage Deputy Editor of The Economist and Author of Writing on the Wall

Tor Nørretranders Author of The User Illusion and coiner of the term ‘exformation’

Valentina Moressa Communications & PR for The Future of Text book

Ward Cunningham Inventor of the Wiki

Dame Wendy Hall Regius Professor of Computer Science, University of Southampton

Zuzana Husárová Poetess, Electronic Literature Researcher, Assistant Professor of Literary Studies at the Comenius University in Bratislava

Niko A. Grupen Student Competition Winner


Ismail Serageldin Founder & Director Emeritus, Library of Alexandria


Frode Alexander Hegland Deep history of interaction leading to text


Frode Alexander Hegland Timeline of text related moments and innovations

V I S U A L - M E T A

Frode Alexander Hegland Introduction to Visual-Meta



Frode Alexander Hegland

This book was started in 2019, when the climate emergency really became apparent and Fake News became a major social issue. The book was completed a year later, in 2020, under lockdown, because the coronavirus (COVID-19) pandemic struck all of humanity at once.

If this is not the time to look at how we go about the business of being human, and how we, uniquely among species, use symbols to think and communicate, then I don’t know when would be.

According to new research, published in New Scientist[1], “Some 320,000 years ago, a technological revolution swept across Africa. The large, flat, leaf-shaped hand axes that had remained largely unchanged for 700,000 years suddenly gave way to a more sophisticated toolkit of smaller, finer points and projectiles.” “Making such a core requires a high level of abstract thought and planning, and so is regarded as a product of modern minds.” The reason for the change was that our environment changed, producing smaller animals which were harder to kill. We had to adapt or die. The result was that we adapted and thrived. We all know the story, but we don’t know the ending; the story continues with us.

Today we are seeing our physical environment changing again, both through the largest scale of the world’s climate and through the attack of the smallest living thing, the virus.

What is different from all the experiences humanity has gone through, however, is the nature of our communication environment[2]: digital communication has different characteristics from analogue communication environments. Some are obvious, such as the speed of transmission and the limitless reproducibility at virtually no cost. Some are beginning to be seen, including the potential of digital links and connections. Others are still beyond our imagination.

If we don’t inspire and feed our imagination and look around for different and better ways to interact with the digital text which now runs through our lives at lightning speed, we will be as lost as the bee stuck on the inside of the window, forever flapping in place, ignoring the window open just outside its immediate view. It is my hope that the thoughts presented in this book will inspire just such a leap in how we learn through text, how we examine the postulates presented in text and how we take ownership of how we communicate via electronic text.

We know so precious little about the potential of text.

We know the dangers, including what we call Fake News, but our species’ response has been to ask people to ‘be nice’, which is on the same level of usefulness as saying ‘let there be peace’, and to trust large corporations, whose revenue relies on continued use of their systems and engagement with arguments, rather than on any measurable notion of reaching understanding or consensus.

Our great ancestors adapted when faced with new landscapes. For them it was adapt or die. We have entered cyberspace–an entirely new information landscape. It is no different for us. The cost of investing in the future is high, but the cost of not investing is higher, in terms of lost opportunities for understanding and the continued spread of fake news.


  1. Lawton, G., [2020]. Human evolution: The astounding new story of the origin of our species. New Scientist. Available from evolution-the-astounding-new-story-of-the-origin-of-our-species/. [Accessed 17 04 2020]
  2. Pérez, J. M. and Varis, T., [2010]. Media Literacy and New Humanism. UNESCO Institute for Information Technologies in Education. ISBN 978-5-905175-05-3

“The thing that amazed me–even humbled me–about the digital computer when I first encountered it over fifty years ago–was that, in the computer, I saw that we have a tool that does not just move earth or bend steel, but we have a tool that actually can manipulate symbols and, even more importantly, portray symbols in new ways, so that we can interact with them and learn. We have a tool that radically extends our capabilities in the very area that makes us most human, and most powerful.

There is a native American myth about the coyote, a native dog of the American prairies–how the coyote incurred the wrath of the gods by bringing fire down from heaven for the use of mankind, making man more powerful than the gods ever intended. My sense is that computer science has brought us a gift of even greater power, the ability to amplify and extend our ability to manipulate symbols.…

We need to become better at being humans. Learning to use symbols and knowledge in new ways, across groups, across cultures, is a powerful, valuable, and very human goal. And it is also one that is obtainable, if we only begin to open our minds to full, complete use of computers to augment our most human of capabilities.”

Douglas C. Engelbart

Improving our Ability to Improve: A call for investment in a new future, 2002, Keynote Address, World Library Summit, Singapore


Vinton G. Cerf

Vint Cerf

Language must be one of the most powerful consequences of evolution. At least, one imagines this to be so, considering that non-human species do not appear to have developed nearly the cognitive capacity for speech that humans have. This is not to say that non-human species are incapable of relaying information by audible means. There is too much evidence of cognitive capacity in non-human species to discount their ability to communicate entirely. There is even documentary evidence that great apes, chimpanzees and bonobos have some capacity to use symbolic means of communication (signing, touch boards). Moreover, it is also clear that birds, dogs, dolphins and elephants, among others, have the capacity to understand, if not to generate, symbolic expressions.

Writing, on the other hand, appears to be a purely human invention along with complex language. What makes writing so intriguing and vital is that it has a potential permanence that ephemeral speech does not exhibit. While oral tradition is as old as history itself, the written record is what preserves knowledge, insight, and the story of humanity. In an age where streaming video and audio as well as video teleconferencing has become commonplace for up to half the world’s population, one might reasonably ask what the Future Of Text is to be.

I consider text to be an indispensable part of our societies, no matter where situated on our planet. The potential permanence of the written record is vital to our recall of the past and to our ability to communicate with the future. It is fair to say that static images can also be preserved and in some cases for thousands of years as the cave paintings of Lascaux show us. Preservation of animated images (e.g. film) and sound are less certain given the technologies needed to render them.

As I write these words, I wonder what a reader in the 22nd Century might make of them. Of course, that presumes that the written record will be preserved for the next 100 years. This is not assured. While it is not my purpose to focus solely on digital preservation in this essay, I want to emphasize how uncertain it is that digital records will be preserved over decades or longer. Part of the problem is the software needed to render digital records (think text processing software, spreadsheets, digital audio and video recordings, still image recordings). There is no absolute assurance that the software used to create digital objects will still be working 100 years from now or that the digital objects produced will still be processible using the software of the future. There is much still to be done to assure the useful preservation of digital content.

In this essay, I want to focus on a different aspect of digital text. Far from the relatively linear medium of written or printed text, digital text objects have substantial additional flexibility. One can recall the oN-Line System (NLS) developed by Douglas Engelbart in the 1960s at SRI International in his Augmentation Research Center. His thesis, shared with J. C. R. Licklider of MIT, DARPA and Bolt Beranek and Newman, was that computers could be used to augment human intelligence. We see evidence of this daily as we engage in voiced interaction with systems trained using machine learning. These systems “speak” 100 languages, “understand” well enough to take oral commands and requests and respond usefully to them. We hear “Alexa, set a timer for an hour” and “OK Google, what is the weather in San Francisco” on a regular basis and many no longer marvel at the technology needed to achieve these seemingly human-like interactions.

Text, rendered by interactive software, can be far more than a simple, linear medium. Digital objects can have structure, and can be manipulated to present content in a variety of ways. Engelbart’s NLS had many such tools. One command allowed only the first line of every paragraph in the document to be displayed. If the writer were careful, such a presentation could be a powerful way to present what the document is about. The writer is urged (though not coerced) into writing with such clarity that the thread of the document’s argument would be understandable from the first lines of each paragraph. Gross manipulations of the text corpus were possible using the structure of the document as a tool for referring to various aggregations of text. The focus of attention is not on formatting and presentation but rather on clarity.

The ability to mine vast quantities of text is another manifestation of text’s future. We will be able to distill the gist of large amounts of content cognitively using various aggregation and summarization tools. We may automate discovery by analyzing large tranches of text. We may be able to juxtapose seemingly disparate ideas, leading to unexpected insights. At Google, and perhaps other search companies, semantics is playing an increasingly important role in the analysis of text. So-called knowledge graphs provide associations among various terms that can be used to expand searches, for example, to include terms that might not appear literally in the search text. Homonyms and synonyms can be applied to increase the scope of the search–perhaps at the expense of precision but to the advantage of recall. Machine learning tools permit translation from one language to another, allowing even broader searches to be undertaken and the resulting responses potentially translated into the searcher’s preferred language. These tools may not be comparable to human-level translation, but they have been surprisingly effective, short of perfection.

This book provides a mass of evidence that text in all its present and future forms is here to stay and should be strongly protected and preserved for the benefit of current and future generations.


Dear Reader of The Distant Future

I am glad this book has found its way to you, and I wonder with amazement by what means you ‘read’ it. Are you reading the paper version, did the digital version manage to stay interactive all this time, are you holding the metal version in your hand, or did you have it scanned somehow?

However you ‘hold this text’, in your hands or in some sort of a display, what you hold are our sincere thoughts and dreams of the Future Of Text.

(It is hard to define what text is, but for this purpose I mean symbolic grammatical communication, as opposed to discrete symbol pictures, either on its own or multiplying the effectiveness of other still or moving images and vice versa).

At the time of writing this book, we are living at the dawn of the digital age. I myself was born a few months before Doug Engelbart’s famous demo in 1968 (I hope you are still aware of it) and my generation has seen the transition from paper-only books to green CRT screens to rough black and white, and then colour and finally high-resolution screens, as high-resolution as early paper printing.

Many of us are excited by the coming full-view headsets or projections of what we call ‘Virtual Reality’ and, when mixed with what our eyes see in front of us naturally, ‘Augmented Reality’. Others are excited by the promise of computing power so far beyond the early days that we think of it almost like magic and call it ‘Artificial Intelligence’, with a currently successful variant: ‘Machine Learning’. We are also writing this at an age when speaking to computers and listening to their synthetic speech has become possible and routine for many tasks, such as asking for basic information, initiating a voice or video call and so on.

We have also developed powerful technologies for authoring images and videos, though the number of people who are inclined and able to produce useful communications in these media remains much smaller than the number who can record still images, video and even 360 or 3D video on their ‘phones’ or other relatively cheap consumer devices, including drones. It is easy to record hours of video. It is hard to make a longer video watchable. It is easy to post a picture of family or friends on social media, but few integrate it into a narrative beyond pictures from a specific event.

The same goes for textual authoring. It is easy to dictate or type in volume, and most people today do, in the form of short ‘text’ messages, social media posts or longer pieces for study, work or leisure. It is much harder to make this mass of text truly accessible – we still write mostly in columns, import illustrations from elsewhere, face severe restrictions on how we can connect or link information from different locations or sources, and have hardly any opportunity to address specific sections of text. We then leave little information in our primitive digital documents, most commonly the PDF format, when we publish the result of our work.

We know we need to build more powerful ways to author, for our intents to come across clearly and to augment the reader’s ability to grasp those intents while still being able to question them. Yet in this generation this is seen mostly as a commercial problem, and many older people think purely of the marvel of how far computer systems have come and lack the imagination for how much further they can go.

I can easily picture you looking back at this age with disbelief at our primitive interactions. But remember, we are but young in this; we are the first digital natives, and we hope our distant descendants will have matured far beyond where we are in applying imagination and resources to better develop the means to record knowledge and carry out our dialogue. When we look at the state of the world today, with an atmosphere polluted in both political and climate terms, it is admittedly hard to imagine getting to such a state, but we hope our species will survive, thrive and grow, all the while investing in how we think and communicate. When we look at the state of our distant ancestors, who first started using stone tools, we are puzzled that they did not improve on the first design for over a million years, and we hope that you will think of us fondly, if a little puzzled by our lack of imagination and investment too.

My name is Frode Alexander Hegland, my wife is Emily Maki Ballard Hegland and our beautiful baby boy is Edgar Kazu Ballard Hegland, all of us now lost in the mist of time, but all of us grateful to have been on this journey and hopefully contributed a little something to the world you live in today.

Dear Reader of Today

The letter to the distant future is as relevant to you as it is to our deep descendants: How can we improve the way we record our knowledge and carry out our dialogue?

Will we abdicate our responsibility and cross our fingers that AI will do the ‘thinking’ for us, and that somehow bigger displays, whether in 2D or VR, will allow us to ‘see’ deeper than what we can represent in a high-resolution frame? Do we expect that those who sell software have augmentation as their core driving force, rather than running a business? Are we expecting academia to take time out from academic pressures to drive the development of what I would call richly interactive text? No, this comes down to you, dear reader. How we invest in and develop our communications is the responsibility of us all.

We need to ask the questions of how we want to live with our own thoughts and those of our fellows today and, for inspiration’s sake, how we want to be viewed in the distant future. And then we need to act.

Can we invest in this to the point where we will need to publish a sequel to this work, explaining how far we have come in such a short time? That would be an achievement for the ages.

Goodbye Gutenberg, Greetings Global Brain.

Frode Hegland

Wimbledon, UK



The full title of the book was originally ‘The Future Of Text, a 2020 Vision’, but the pandemic, political turmoil and climate crisis of this year have changed the meaning of 2020 from ‘perfect vision’ to something closer to ‘clouded vision’. We still hope this year will be an inflection point for our species to grow up and look honestly at ourselves and our interactions with love, not to succumb to easy ignorance and hate. But only you, dear reader, will know what the year 2020 will stand for in the future.


I have invited some truly phenomenal people to contribute to this work, and against all odds, they said yes. There are eminent representatives from the worlds of art, typography, science, software development, academia and more. Text writes our history, and text guides our future, but text itself is not often reflected on, let alone written about. I am grateful for the rich contributions herein to the potentials of the Future Of Text to inspire further thought and dialogue. I have taken the opportunity to note down some aspects of the Future Of Text by way of introduction. First of all, isn't it obvious how important it is to look closely at what text is and how we interact with it?


The following may be obvious, but if so, why do we not invest more in authorship and readership tools, systems and education?

Both authorship and readership are thinking; they are not transcription or ‘downloads’ of simple bits, but processes of clarification and connection. Beyond writing the most basic sentence, the act of writing changes how you see what you write.

In order to write something, you have to translate a series of connected thoughts and vague notions into a rigid, visible form which forces you to make decisions. If you want to build something, invent something, create something–or if you just have an idea or insight which you want to communicate clearly to others–the first step of making it real is making the decisions you have to make when you ‘commit’ to paper. This process allows you to freeze your thoughts and organise them and edit them as you see fit, giving you entirely new insights into what you first ventured to note down. Exactly how our tools allow us and constrain us in freezing, organising and editing shapes the space of thought.

This augmentation of how we can use visual text to support our thinking is wondrous. In a way, it is analogous to how computers now use the Graphics Processing Unit (GPU) to offload work from the Central Processing Unit (CPU) to give the CPU more ‘bandwidth’ for higher level processing. In human terms what we are doing is using our visual processing (occipital lobe) to offload work from our higher level conscious thinking (prefrontal cortex).

Thoughts are not whole and coherent.

For any serious intellectual or creative endeavour, editing will be needed; that is no surprise. Editing with clay tablets, papyrus or paper presented different challenges and opportunities. So it goes for editing in a traditional word processor, an email application, a web design package or a social media post. All of these authoring environments, large or small, provide for different types of writing and reflection. This is obvious. It is also obvious that we can go much, much further in producing thoughtful text environments.

It's also obvious that discourse via published/frozen documents affords us a different mind state than face to face discussion or even instant social-media chat. It is a space of time for reflection and refinement to increase clarity. While we experiment with ever-changeable documents and ever-flowing streams of discussions, let's be mindful of the value of making deliberate, thoughtful statements which can benefit from the same deliberate, thoughtful response.

Text is mind expanding. Let’s expand further.

Arming the Citizenry: Deeper Literacies

Vannevar Bush famously wrote in an article in The Atlantic Monthly titled ‘As We May Think’ in 1945 that “…publication has been extended far beyond our present ability to make real use of the record”.

The ease of reposting with a click or sending out an army of social media bots to sway opinion and change elections has made it clear that publication through easy one-click sharing has been extended–far beyond our present ability to make real sense of the record–the dialogue stream in which we live.

Text has been weaponised, and in this environment we must arm the citizenry with the tools and techniques to gain a deeper literacy, to fight not only misinformation, propaganda and the morass that is ‘fake news’, but also to gain deeper insights into all the text-encoded information which flows through our work and private lives. This includes how we digest news and perform academic and scientific research, and it will involve building ever more powerful tools and educating users in how best to use them.

Hypertextual thinking showed great promise in the mid-twentieth century but became buried under ease-of-use clicks which brought us great entertainment. Hopefully the reality of ‘fake news’ will allow the potentials of advanced text interactions to be realised.


Ludwig Wittgenstein wrote in the preface to his Philosophical Investigations:

I have written down all these thoughts as ‘remarks’, short paragraphs, of which there is sometimes a fairly long chain about the same subject, while I sometimes make a sudden change, jumping from one topic to another. – It was my intention at first to bring all this together in a book whose form I pictured differently at different times. But the essential thing was that the thoughts should proceed from one subject to another in a natural order and without breaks.

After several unsuccessful attempts to weld my results together into such a whole, I realised that I should never succeed. The best that I could write would never be more than philosophical remarks; my thoughts were soon crippled if I tried to force them on in any single direction against their natural inclination.– And this was, of course, connected with the very nature of the investigation. For this compels us to travel over a wide field of thought criss-cross in every direction.

The act of authorship is an act of taking fluid human thoughts and structuring them into linear arguments or narratives. The structure can be tight and coherent, as in an academic argument, or more arbitrary to suit the whims of the author, or in the case of this book, the editor. I have chosen to organise the articles written by the contributors simply alphabetically by first names (which seems more friendly and intimate). I have also built an experimental reading application called Reader (as introduced on the previous page), to give you freedom to jump around as inspiration and curiosity guides you – the digital version should give you greater freedom than you would otherwise be used to.

It should be up to you, dear reader, to choose how to unweave any and all of the text in this book and how you weave it into your own life and understanding. It should be up to you to follow your intuitions and curiosities and see it all from different perspectives and connect it all to the rest of the world in any way you choose.

After all, was this not what hypertext seemed to promise all those years ago – experimentation and wonder? Today we have settled into what have become familiar modes of interaction, and I feel we need to take a step back, shake our heads a bit and look at how we deal with text differently. I have written a few sketches from my personal perspective relating to the dimensions of what we might look at:

Text is simply marvellous.

Text allows us to see, connect, transform, interact and freeze knowledge like no other medium. By allowing us to think outside of our brains, text allows us to see a bigger picture and focus on details without losing track of where we are in our short term memory. Text allows us to see different perspectives and grasp our knowledge in powerful ways.

Text gives us a fantastic ability to connect across space and time. Text derives its powerful richness to represent human thought not so much from the individual symbols but from how they interact – the interactions between the symbols, and our interactions with them, matter more than the individual symbols.

Text allows us to freeze thought from the ever-shifting human mind, as simple words or as sophisticated statements using the power of compositional syntax. Digital text also allows us to unfreeze thought and to interact with it like a multidimensional sculpture, in ways we cannot fully imagine until we actually use it. This is similar to how we cannot imagine a new sense – which is what interactive text is – until we have it and can try it, and from there build further powerful interactions.

Text is transformative. As a teacher, when someone presents an exciting idea to me, I usually ask them to write it down, knowing that the act of committing the thought to paper is difficult, and hence they will only work at it if it’s truly a valuable idea to them. It takes the idea out of the soft mental presentation of thought and forces the author to really look at it to present it as a coherent thought.

Myriad interactions

Interactions should support reading digital text by skimming for salience and by deep reading for comprehension and evaluation, as well as support following and evaluating inbound and outbound citations, implicit links and high-resolution explicit links. Annotations which 'live' in their own dynamic environment, text for thought – and so much more.

Interactions should also support sitting under a tree and reading a paper book with beautiful typography and nothing to come between you and the physical paper pages of the document, be it book or paper or whatever else you would like to read.

Of course, in due course interactions will include all of those, at once.

Myriad Texts

Whatever text is and whatever text will be, text will never be just one thing, any more than sound or pictures will be one thing. The plurality of contributors to this book reflects the plurality of what text is – all texts in the same language are based on the same alphabet, but the medium and use determine what the text is. As was the case with the difference between a manuscript (handwritten, typed or other), a publication or a book, even digital text for authoring, thinking and reading is very different. A few types of text come to mind to illustrate this, in no particular order:

Text Carved in the side of a tree, Burnt into a wooden product, Embossed on parchment and Tattooed onto skin. Email text, Academic Document text for students, teachers, examiners, journal editors and readers. Annotations, Notes and Text Tags. Text for Reading for Pleasure (where the enjoyment of page turning matters more than rigorous interrogation of assertions), which is again different from Serious Books (where the authority of the author matters to the reader and interactions for comprehension are crucial – similar to reading academic documents, but not the same). Also Business Memos, Memoranda of Understanding and Contracts. Computational text (text in a programming or scripting environment, or in pages of 'normal' text made magical by the power of the computer), as well as Mathematical text. Intellectual texts: Poetic, Erotic, Inflammatory, Calming, Divisive and Inclusive. Text for Display (such as a logo or hieroglyphics written on a temple wall) and Handwritten text (where personality, perceived effort of production and style matter), which serve other purposes where the intentions and tool systems differ from other text. Government Documents, covering everything from mundane Laws and Traffic Citations to Foundational Texts such as a country's constitution. Pamphlets and Blogs to change your point of view, perhaps leading to fundamental change of foundational texts. Journal entries for private thoughts and Graffiti to shout loudly. Press Release text to shout more subtly. Propaganda falling from the sky to blanket a city in fear. Threatening Messages, Harassing texts, Promises Kept and Promises Abandoned. Notes under Hotel Doors, Crumpled Paper in wastepaper baskets. Aspirational Quotes and Aspirational To Do Lists, as well as Last Wills & Testaments. Text in Graphs and text in Columns. Legal and Medical text, where the document’s authenticity is paramount, even more important than interactions.
Social Communication text, such as Text Messages (SMS), Social Media and Emojis, which are quick to produce and digest but which make their way into the ether, not to be easily found again, inhabits yet another world – a world shared with Fake News text, which is quick to forward but slow to consider ... and text for Sensemaking, text for dynamic Thinking (notes on cards, PostIts® or in a dynamic view on a computer screen, to see relationships and connections on the page and beyond), which hints at the early computations our forebears first marked with sticks in sand, today made magic by the power of digital systems.

Without meaning to appear to have attempted a complete list, I end with text for Warning (traffic etc.), which is almost the polar opposite of other text: it is text which is obtrusive and whose purpose is to disrupt.

Text to inform, text to inspire fear, text to make you think, text to make one wonder, text to inspire awe, text to warn. Text to make you conform, text to make you rebel...


The issue of addressing is central to what information is and to human thought and communication. If we cannot somehow point to something–address it–we cannot refer to it, put it in context, interact with it or even think about it.


An adult Homo Erectus indicates to his wife to look over by the trees–there is a threat hiding there.

My beautiful young son Edgar (who just turned 1), points to the button on the tumble drier–he wants to turn it on.


A 20th century student quotes from a well-known publication in her field, and her teacher, well aware of this work, reads the quote with approval. Meanwhile, an academic has a brilliant insight from reading in adjacent fields and cites an authority there, but it is unreasonable for his teacher or peers to go to a library, track down the books and journals and thumb through them to the sections he cited. His shining new insight therefore goes largely ignored by his community, who do not trust his sources and find it unreasonable to check them.

A 21st century academic cites a passage from a history of modern politics but cannot refer to the page number since he is reading a commercially published electronic book or a PDF behind a paywall. It is not practical for the reader to track down the document (and pay for it) and the fact that it does not have internal addressing (no page number) results in his reader either blindly agreeing with the argument or simply discarding it–without having read the citation source. This scenario highlights the vital importance of being able to point while communicating and thinking. It also demonstrates the ‘dirty little secret’ of academia whereby citations are traditionally used to add weight to an argument within a known academic space and not as a way to connect to ideas and assertions further afield.


A blogger links to an anchor on a paragraph in another blog to allow the reader to instantly check the section in context, but the other blogger forgets to pay her domain fee and the domain name stops working. This little scenario plays out every day: link rot caused by domains disappearing.

Interaction comes from being able to grasp or point, and this is made possible by addressing. It arises out of addressing: If Homo Erectus could not use his finger and point to the trees, he would not be able to indicate where the threat was, and my son could not communicate that he wanted to press the button if there was no way for him to indicate/address/point to the button.

Similarly, the citation brings forth something in the 20th century academic’s shared knowledge (the student cites what the teacher expects to be cited and therefore already knows), yet it was not powerful enough to bring forth that which was not already known to the reader, since going to the library to find a document and then thumbing through it to the right page was not worth the effort in the 20th century. This problem is replicated in digital form for the 21st century academic, since links to inside digital books or academic documents (PDFs) are not usually practically available.

Analogue / Real World Addressing

In analog systems, pointing can be done by address (such as a house address), ‘implicitly’ (to use Doug Engelbart’s term, for example, a word is implicitly linked to its entry in a dictionary), explicitly (Planet Word, page 395), high resolution (The Neptune File, page 73, second paragraph), relatively (it’s the one to the left of the green one), temporally (turn left after 20 minutes) and by criteria (the small ones). Some addressing happens purely in the author’s and reader’s minds, such as the basic meaning of words, allusions and references to other literature or things or ideas in the world, what Tor Nørretranders calls ‘exformation.’

Digital Addressing

Limits of Links

Digital addressing has the great benefit of being able to employ hyperlinks, or web links, which are addresses to specific locations of documents on computers, primarily server computers. These can be retrieved at great speed, but they can also rot when the server goes offline or the DNS (Domain Name System) record for that computer lapses. They can link at high resolution into the content of a document, but only if the author has explicitly made this possible. Similarly, some digital books can be linked to at high resolution, but only if the publisher has made it possible.

Linking between social media posts is possible where the company allows it: at the time of writing, Twitter allows links and Facebook allows embedding. Linking to email or chat messages, such as iMessage and SMS, is not possible. Linking to specific cells in a spreadsheet is not possible. However, linking to a specific time in a (YouTube) video is easy and can be very powerful, while linking to a specific time in a video on a news or other website is not possible.
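Time-based video links work by carrying the start time in the address itself. A minimal sketch of how such a link might be built; the 't' parameter name follows YouTube's convention, and the URL shown is hypothetical:

```python
def timestamp_link(url: str, seconds: int) -> str:
    """Append a start-time parameter to a video URL.
    The 't' parameter follows YouTube's convention; other sites differ
    or offer no such addressing at all."""
    separator = "&" if "?" in url else "?"
    return f"{url}{separator}t={seconds}"

print(timestamp_link("https://example.com/watch?v=abc123", 90))
# → https://example.com/watch?v=abc123&t=90
```

The same principle – an address that resolves not just to a document but to a position inside it – is exactly what the author notes is missing for email, spreadsheets and most video on the web.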

Citation Concerns

Linking between unpublished documents – or even between windows – is generally not possible. Citations are a step outside the hyperlink method which allows the reader to access documents through traditional ‘descriptive’ citation information: title, author, date and so on. Modern academic documents also feature links to download locations through URLs and DOIs. Whereas digital text links provide a function not possible in the analog world, digital text in books and documents has (at the time of writing) lost the resolution of linking which page numbers offered: an author cannot link to a specific page directly, only refer to it in the text for the reader to manually scroll to the correct location. Perhaps most importantly, we can say that citations are a powerful protection against dishonest or lazy research, and against Fake News and propaganda – but only if the citations are conveniently checkable. Web links and paper-style citations in PDFs fall far short of this.
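A DOI is checkable because a public resolver turns the identifier into a followable web link, a one-line sketch (the doi.org resolver is real; the DOI shown is merely an example):

```python
def doi_url(doi: str) -> str:
    """Turn a DOI identifier into a resolvable link via the
    doi.org resolver, which redirects to the publisher's copy."""
    return f"https://doi.org/{doi}"

print(doi_url("10.1000/182"))
# → https://doi.org/10.1000/182
```

Note that this only addresses the document as a whole; resolving to a page or paragraph inside it is exactly the resolution the passage above says has been lost.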

Connection Opportunities

The most basic development of connecting text will be to make explicit links at the resolution required – referring to anything from a collection of work down to a single sentence in a document. The connections should be able to be categorised (have types) where desired, to inform the user (human or machine) of the intention of the connection. The connections should be actionable: they should be able to go to a location and retrieve data. The connections should be shareable, like Vannevar Bush’s trails. It will also be increasingly important for the connections to be robust, with what they link to having a good likelihood of being available when needed.
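One way to picture such typed, high-resolution connections is as a small data structure. A hypothetical sketch – the addressing scheme and type names here are invented for illustration, not any existing standard:

```python
from dataclasses import dataclass

@dataclass
class TypedLink:
    """A connection between two addressable spans of text.
    Fields are illustrative: 'source' and 'target' use an invented
    document#fragment addressing scheme."""
    source: str     # address of the citing span
    target: str     # address of the cited span
    link_type: str  # the link's intention, e.g. "supports", "refutes"

link = TypedLink(
    source="doc:thesis#para-12",
    target="doc:neptune-file#p73-para2",
    link_type="supports",
)
print(link.link_type)
# → supports
```

Because the type travels with the link, both humans and machines can ask not just *where* a connection goes but *why* it was made.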

Additionally, implicit links should be as quick and easy to follow as explicitly created links. And finally, connections need to be coupled to bold views for the user to be able to grasp the links in the most powerful ways.


There are incredible possibilities for how we can interact with symbols to compare, contrast, analyse, share and ‘live with’ our information in a richly interactive environment. In order to make this possible, it is not enough for any one software vendor to add functions. We have to work together to come up with ideas for addressable spaces and test, employ and improve them. We need to pursue new and open ways to create, connect and share information through a truly open method that is not owned by specific interests. This includes rich and open document formats and means of referring to other documents and objects.

The evolution of text will follow the same rules as any evolution. Text will evolve within the options available, pushed by the evolutionary pressures which have a bearing on text itself. In the case of text, this is the constraint of market forces serving the very limited expectations of users of text.

Doug Engelbart called for Directed Co-Evolution, since we have evolved the intelligence to actively take part in the evolution of our knowledge environments, and that is what this book is about. The goal of the book is to open the evolutionary space and to change the evolutionary pressures from the downward pressures of tool production costs and systemic issues to an upward pressure focused on greater understanding among the general public and specialised workers alike. We can and must work to create more powerful interactive text environments than what we have in these first few decades of the 21st century.

To better see where we can go, we need to develop a conceptual framework for what text inherently is, what it can be, and how it can serve our needs to think and communicate better. We cannot think of text only as a subset of other frameworks. This is something that will take a myriad of perspectives in deep dialogue, and something we can really use text itself to help us achieve. So let’s not let the Perfect be the enemy of the Good. Let’s build what we can and engage in the dialogue that we can.

Thank you for having read through the introduction and thank you for being a part of this journey.


This book is a labour of love from all those involved. It is necessarily incomplete, and we hope to address this over the years with future editions, continuously adding further points of view. Please feel free to get in touch with suggestions – just email the editor, Frode Hegland:

In the pages beyond you will come across many types of text with many points of view behind them. There are technical texts and artistic texts, poetic texts and academic texts.

Some address interactions, hypertextuality and others address the letterforms and visual aspect of text. There are concerns of the archival aspect of text–keeping text alive for generations. Some discuss the semantic nature of text and how text can augment our minds, our communication and collaboration–and how Fake News, cheating and plain laziness can reduce our understanding of the text and the thoughts behind the text.

Importantly, even though penned by brilliant thinkers of diverse fields, these are personal reflections of fears and hopes, of limitations and opportunities.

What are your thoughts about the Future Of Text?

Maybe this page can serve as a place for you to note down your initial thoughts on the Future Of Text, to refer back to when you have dived in and absorbed the perspectives herein.

The liquid potential of the future is nice and warm, dive in!

You, dear reader

A space for your initial thoughts, should you wish to note them down, on paper or as annotations if you read this as a PDF:


Adam Cheyer

Interpretable And Executable Text

In the early to mid-1990s, I was composing a document and needed to find and insert some information from another program. To do this, I had to leave my text editor, search for and launch another application, browse for the information I was looking for, format it appropriately for my needs, and then switch back to where I was in my original text editor. All of this took minutes, and more importantly, took me out of the flow of what I was writing. There had to be a better way, I thought.

As a result, I devised what I called “The Invisible Interface”. The idea was to use the computer’s clipboard as an interpretable information bus for retrieving information I wanted without ever having to leave the application, or even the very text line, I was working on. As an example, I would type something like, “Please send the package to my sister’s address”. I would then select and copy to the clipboard the phrase “my sister’s address”, hit a function key to request processing, and I would instantly hear an audio cue: “bing” if successful, “bong” if the system was not able to resolve the phrase to more specific information. If I heard a “bing”, I would hit the paste command in my editor, and it would replace the selected text “my sister’s address” with “Sara Cheyer, 123 Main St. Town, State, 91101”. I connected a myriad of databases and information sources, and could instantly and painlessly retrieve all sorts of information by a concise, direct phrase inserted in-line in the context of what I was doing: “population of South America’s largest country”, “email for my boss”, “free time on Thursday”, “flower in French”, etc.
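The bing/bong flow described above might be sketched roughly as follows. Everything here is a toy reconstruction for illustration – the source registry, the lambdas and the return shape are invented, not Cheyer's original implementation:

```python
# A toy "Invisible Interface": a selected phrase is resolved against
# registered information sources; an audio cue signals success or
# failure before the result replaces the selected text.
SOURCES = {
    "my sister's address": lambda: "Sara Cheyer, 123 Main St. Town, State, 91101",
    "flower in French": lambda: "fleur",
}

def resolve(phrase):
    """Return ('bing', result) if the phrase resolves to information,
    or ('bong', phrase) if it cannot be interpreted."""
    source = SOURCES.get(phrase)
    if source is None:
        return ("bong", phrase)   # unresolved: selection stays unchanged
    return ("bing", source())     # resolved: paste replaces the selection

print(resolve("flower in French"))
# → ('bing', 'fleur')
```

The key property is that the clipboard acts as the only interface: the user never leaves the line of text they are writing.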

Over the next few decades, I continued to explore the efficacy of interpreting typed and spoken text as a means for accelerating and streamlining a user’s path to completing tasks. Each system I built became richer and more ambitious, adding more vocabulary, more capabilities, more conversational interaction, more personalization, and more modes of interaction across multiple modalities and devices.

One of the first major challenges with what seemed like such a simple idea came to light when we were working on Siri, the startup sold to Apple to build their ubiquitous voice assistant. We had a prototype that was able to understand requests across a reasonable number of domains. However, when we loaded a vocabulary set of about 25 million business names, all hell broke loose with the system – I remember typing one of the most basic commands, “start over”, which usually reset everything to a known state. In response, I received “Looking for businesses named ‘Over’ in Start, Louisiana”. In that moment, I realized that just about every English word was either a business name or a place name, and that the number of combinatoric ambiguities was astronomical. Over time, we found methods to overcome this challenge, and the system was again able to understand and execute requests effectively (including utterances mentioning business names, the newest movie that just came out, odd musician or music group names containing symbols as part of their name, etc.).

Adding speech recognition to the typed text input brought additional challenges. When a movie called “9” came out, one of my spoken test cases sounded like “Get me 2 4 9 4 3 at 7 30 on 9 9 09” (get me two [tickets] for [the movie] “Nine” for three [people] at 7:30 on 9/9/2009). This worked, but then I nearly threw up my hands in defeat when shortly afterwards, a different movie came out called “Nine”...

As I worked with conversational assistants that could help you complete tasks more quickly through the power of executable commands, I came to understand that to realize their full potential, no one group or company could program in reasonable responses for everything someone might ask. The key to maximizing the value of this technology would be to grow an ecosystem such that every developer in every industry could create a conversational interface to their content and services, in a manner much like how they create websites and mobile applications today. A language-based interface could not only access information from individual “sites”, but could provide a unifying interface across content sources. I believed that if done in a scalable way, a conversational assistant interface could have even more impact on consumers and on industry than the previous interface paradigms of the web and mobile. I co-founded Viv Labs to pursue this vision.

There are several important challenges in bringing this ambitious vision to fruition. The first is that to execute the intent behind a textual utterance, someone needs to define appropriate semantics, a representation for each operational action and data object. Our approach has been to let the ecosystem define these as it likes. We encourage collaboration, standards, and sharing as a means to interoperability, and we provide some basic models that developers can use, but this is optional: each developer may define their own models and their own mappings between natural language and those models.
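
A sketch of what developer-defined models and mappings might look like. Everything here is invented for illustration: `TicketOrder`, `register_mapping`, and the `{slot}` pattern syntax are assumptions, not Viv's actual API.

```python
import re
from dataclasses import dataclass

# A developer defines their own data object...
@dataclass
class TicketOrder:
    movie: str
    seats: int

# ...and their own mapping from language patterns to it.
mappings = []  # (pattern, builder) pairs registered by developers

def register_mapping(pattern, builder):
    mappings.append((pattern, builder))

register_mapping(
    "get me {n} tickets for {movie}",
    lambda n, movie: TicketOrder(movie=movie, seats=int(n)),
)

def parse(utterance):
    """Try each registered pattern; build the developer's model on a match."""
    for pattern, builder in mappings:
        regex = re.sub(r"\{(\w+)\}", r"(?P<\1>.+?)", pattern) + r"$"
        match = re.match(regex, utterance)
        if match:
            return builder(**match.groupdict())
    return None

order = parse("get me 2 tickets for Nine")
print(order)  # TicketOrder(movie='Nine', seats=2)
```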

A second challenge: what happens when multiple sources map the same language to different representations? Which service gets called, and what happens? Our solution was to partition the top-level linguistic space into “natural language categories” that organize providers with similar language into groups. When a user asked for a ride to the airport, multiple providers (e.g. Lyft, Uber) would be presented, and the user could select which one (or ones) their assistant should use. The request would then be routed to each selected service to interpret and execute as it saw fit.
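
The category-based routing could be sketched like this; the category name, provider lists, and `route` function are illustrative assumptions, not the real system:

```python
# Providers register under a natural-language category; the user's own
# selection decides which of them actually receive a request.

categories = {"ride_hailing": ["Lyft", "Uber"]}
user_enabled = {"ride_hailing": ["Uber"]}  # the user picked Uber

def route(category, request):
    """Send the request to every provider the user enabled for this
    category; each provider interprets it as it sees fit."""
    providers = user_enabled.get(category, categories[category])
    return [(p, f"{p} handling: {request}") for p in providers]

print(route("ride_hailing", "ride to the airport"))
# [('Uber', 'Uber handling: ride to the airport')]
```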

Our most important challenge was how different services could interact with each other. We quickly realized that almost every request composed multiple services. “Get me a flight to the Superbowl next weekend” could involve a flight service, an event service, a geo service, and a date/time service, none developed by the same company. Since the number of ways different services can be combined through language is combinatoric, a human developer can’t efficiently enumerate them all. Our solution was to implement “Dynamic Program Generation”: on the fly, the system would write code to integrate all relevant services available to a particular user, present results, interact as needed to get answers to questions, and then learn from those interactions to eliminate unnecessary questions in the future.
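
One way to make “writing code on the fly” concrete is backward-chaining over declared service signatures. Everything below (the service names, the type vocabulary, and the `plan` function) is a toy assumption, not Viv's implementation:

```python
# Each service declares what inputs it needs and what it produces.
SERVICES = {
    "event_service": ({"event_name"}, "venue"),
    "geo_service": ({"venue"}, "airport"),
    "flight_service": ({"airport", "date"}, "flight"),
}

def plan(goal, known):
    """Backward-chain from the goal type to the facts we already know,
    returning the ordered list of services to call."""
    for name, (inputs, output) in SERVICES.items():
        if output == goal:
            steps = []
            for need in inputs:
                if need not in known:
                    steps += plan(need, known)
            return steps + [name]
    raise ValueError(f"no service produces {goal!r}")  # ask the user instead

# "Flight to the Superbowl": we know the event name and the date,
# and the planner chains three unrelated services to get a flight.
print(plan("flight", known={"event_name", "date"}))
# ['event_service', 'geo_service', 'flight_service']
```

A real planner would also deduplicate steps, handle multiple candidate services per type, and fall back to asking the user when no chain exists; the sketch only shows the core composition idea.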

I am still striving for a day when every connected person and every connected industry will be significantly and positively impacted by a personalized assistant that can receive textual, spoken, and multimodal requests and delegate them on the user’s behalf to their preferred services on the web, helping accomplish complex multi-step tasks seamlessly and efficiently. We will get there soon...


Adam Kampff

The Brain’s Past Will Define Text’s Future

A brief history of cortex

Human cortex evolved from the frontmost bulge of the early vertebrate brain. This “forebrain” was originally quite small, but it grew larger and larger throughout evolution. It eventually flopped back over the rest of the brain and curled onto itself to create the undulating folds that characterize the outer surface of the human brain. What did cortex do that was so useful?

The role of cortex in early vertebrates is subtle. In fish, it gets inputs from many sensory organs and sends outputs to many motor areas, but it is not directly responsible for controlling behaviour. Instead, the cortex modifies the hardwired reflexes generated by other parts of the brain. Our fish ancestors relied on these simple reflexes to catch food and escape predators. However, when every individual uses the same strategies, then the entire species is vulnerable to a sudden change in the environment that makes one of those strategies dangerous (e.g. the arrival of a new food source that is poisonous). In short, the early cortex was responsible for creating diversity in individual behaviour, and thus making the species as a whole more robust.

In reptiles and birds, the diversity created by cortex was connected to a powerful learning system. This system could encourage behaviours that resulted in something good and suppress behaviours that resulted in something bad. The diversity created by cortex could now undergo “reinforcement learning” to improve behaviour through experience. Generating random diversity in behaviour, and then reinforcing what works, is an effective strategy. However, evolution soon realized that if the diversity generated by cortex could be structured to better represent the world, then much more efficient learning is possible.

The ability of cortex to model the world, and use that model to learn new behaviours and solve new problems, is conspicuous in mammals. When a rat, who has already learned to complete a maze by making the correct sequence of left and right turns, is given a chance to climb out of the maze, then it will run directly to the cheese. This “shortcut” requires the rat to build a model of space, not just a memory of the sequence of correct choices. A model of space is created in the frontal tip of cortex called the hippocampus, and it provides the foundation for the more elaborate world models that cortex would create as it continued to expand.

Primates are unique. They use cortex’s mental model to not just represent the world; they have also given it direct control over their movement. The cortex of primates controls muscles by overriding the older brain systems that were used by earlier mammals, birds, reptiles, and fish. This allowed primates to negotiate much more complex environments (such as the tree canopy), where knowing that one branch will support your weight and another will break is required for survival. Thriving in these complex, dynamic environments drove further expansion of cortex, and two very important things happened next.

The origins of text

With cortex now controlling movements directly, observing the behaviour of a primate was akin to observing the action of cortex. When one monkey watched another monkey move, he was watching him think. This high-bandwidth exchange of information between two cortices was a necessary requirement for the development of language.

Our early human ancestors exploited this new ability to communicate to efficiently coordinate their actions and exceed the capabilities of one individual cortex. However, in order to completely coordinate with each other, the model in one person’s cortex has to be recreated in the other’s. This is not possible if cortex’s model is directly based on the sensory world. For example, as I was previously describing monkeys swinging through trees, you were likely imagining a rainforest canopy, but you were unlikely to be sitting in a rainforest. Your cortex has the (potentially unique) ability to “untether” from your immediate sensory environment and model something else. This is the true superpower of human brains.

Untethered human brains can thus communicate and coordinate more effectively than any other species, and they can also imagine things that do not (yet) exist in the world; things that would make it easier for them to survive, i.e. tools.

From millstones to spears, early tools met practical needs and expanded the physical abilities of humans. Yet as humans relied more on communication, the limits of gesturing and vocalizing created a new demand. Humans needed a way to make transient speech permanent, to record and distribute what they communicated to each other. What began with pictures and ideograms, would evolve c. 5,000 years ago into the first visual depiction of the mental abstractions created in cortex, the first text, and the rest is (by definition) history.

Text, like any technology, has always had constraints. Although humans have an innate drive to learn to speak and understand spoken language, learning to read and write text still takes years of practice. However, the ability to create a record of human thought was so desirable that text steadily became more efficient to produce and share, culminating in the invention of the printing press. When we entered the digital age, text became a medium for the mass distribution of human thought.

Is there a future for text?

While text arose from the (ephemeral and local) limitations of human speech, technology has recently made these limitations obsolete. It is now possible to record spoken language and distribute it instantly to anyone around the world. Is digital voice the future (and thus end) of text? Unlikely. Reading is fundamentally different from listening. It uses a different sensory modality, vision, which is optimized to scan and search through a distributed “image” of information. Speech is serial, and although it can be accelerated and searched with digital technology, the auditory modality itself has different capabilities. Visual representations of human abstraction will always have a role in conveying, composing, and consuming human thought.

The brain-based Future Of Text

Current text technology still has limitations. Reading and writing are still confined to two-dimensional surfaces, a constraint of text production that has persisted from the first stone tablets to our modern screens. However, as we learned from rats, human thought is grounded in a representation of three-dimensional space. When the region of the brain that stores this model of space, the hippocampus, is damaged in a human, they are unable to ever store new information; they never form new memories. What have we sacrificed by restricting our tools for recording and sharing human thought to two dimensions? What will happen if text transcends the screen?

Imagine three-dimensional documents, leveraging advances in spatial computing (AR/VR), in which following a hyperlink brings you to a new location, but retains its relationship to all other locations.

Imagine a programming environment where a branch actually branched, or where a memory address referred to a location in space.

Imagine if we had a tool for composing human thought that was based on how the brain creates thought.

Alan Kay

The Future Of Text

We are the species that has invented itself the most, by creating the exogenetics of language and culture which carry our continual further inventions to each other over time and space to invent and reinvent our futures.

But “inventing the future” doesn’t always mean a better future. Most ideas are mediocre down to bad, and when carried onward and outward have catalysed futures we now regard as unfortunate histories. Bad ideas in an age that can replicate and proliferate most things almost without limit produce ever lower “normals” that take us far from our best impulses and interests.

In our time many new technologies provide seemingly “more natural” substitutes for literate discourse — for example: telephone, radio, television, chat/twitter, etc. — in other words, technologies that allow oral modes of thought to reassert themselves. We need to ask whether “oral societies” in the past or as they are seeming to reappear, are in the best interests of human beings. It has often been observed that many of the properties of “civilisation” are inventions — including deep literacies — that are at odds with what seem to be our genetic endowments, but instead try to provide alternatives which lift all of our possibilities. In other words, “civilisation” is not a state of being but ongoing attempts towards “becoming more civilised”: the next level of human exogenetic evolution. It is the very unnaturalness and artificiality of “civilisation” — compared to the closer to our genetics oral cultures — that is its strength. One way to think of modern education is that its main goal should be to help children take on and become fluent with these “unnaturals” that allow us to cooperate and grow in so many more ways.

Plato had Socrates complain that writing robbed people of their memories, and allowed bad ideas to be circulated even after the death of an author, who could no longer be chased down and “argued right”. But both loved irony, so we should note that Plato was using writing to make this argument, and I think was hoping that readers would realise that anyone who wants to remember is given a great boon by writing because it provides so many more perspectives that are worth putting into action between our ears rather than storing them in a page on a shelf. Writing merely forces us to choose whether to remember, and gives us much more in the bargain when we decide to do so.

Another part of Socrates’ complaint worth pondering is that writing doesn’t allow dialogue and negotiation of meanings between humans, and in fact seems to preclude what he considered reasonable argument. Again, Plato in presenting this idea was also one of the earliest inventors of the new structures needed to allow written description, exposition, and argument, and uses many of them throughout the dialogues. It’s hard to imagine that he didn’t realise full well that he was showing one strong way to present arguments in writing by having Socrates argue against the idea.

Not taken up by Plato are some of the additional gifts that writing almost magically adds, even as it looks like one word after another, just like speech. Beyond transcending time and space, writing allows much longer and more intricate arguments to be presented, especially when the errors of copying no longer have to be guarded against, for example via the invention of the printing press. This property was noted by Erasmus and his friend Aldus Manutius the printer when they decided in the early 1500s to put page numbers in books to help longer arguments refer to earlier and later parts (this was quite a few years after the first printed books appeared, and years later than the marginalised Jewish culture which used page numbers for the very same purpose in studying and cross-indexing the Talmud).

Most subtly we need to ask: just what is it that happens to our brain/minds when we learn to get deeply expert in something that was not directly in our genetic makeup? And especially if we get deeply expert in a number of very different ways to use language beyond the telling of stories and keeping accounts?

McLuhan, Innis, and Havelock were the most well known who started to ask how human thinking is not just augmented, but fundamentally changed, by writing and reading, and how this affected what we call the growth of civilisation.

One facet of this path that McLuhan didn’t explore—he started as a literary critic—was the idea that a new way of thinking could be invented/co-evolved, embedded in language, especially in written language, and when learned fluently would be almost like adding a new piece of brain—a “brainlet”—that could take us far beyond biology. There are lots of these now, but a simple example would be the calculus, which allows a type of thinking even the geniuses of antiquity could not do. Much of mathematical thinking “piggy-backs” on our normal language workings which, in computer terms, is universal enough to allow much more expressible and powerful higher level languages and ideas to be run on a much simpler mechanism.

An enormous such piggy-back invention is Science, which as Bacon called for just 400 years ago in 1620, is a collection of the best methods and heuristics for getting around our “bad brains” (what he called “Idols of the Mind” that endlessly confuse and confound our thinking). This larger notion that Science is about much more than just poking at nature, but is very much about dealing with our mental deficiencies, has been sadly missed in so many important quarters.

A really large context for these perspectives is how “architecture” (felicitous organisations of things and ideas) can qualitatively elevate the simplest of materials to undreamt of heights. For example, it was hard for most humans to contemplate the idea — and now the vetted reality — that life itself is an amazing organisation (and only an organisation) of just 6 simple kinds of atoms plus a few more trace elements.

Similarly, computers can be made entirely from just a single kind of simple component which does a comparison: if both inputs are true then the output is false, otherwise the output is true. The rest is “just organisation” of these elements. A powerful approach is to set up the components so they can manifest a symbolic machine (software), and the software can then be further organised to make ever higher level software “machines”.
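
The single component Kay describes is what engineers call a NAND gate, and its universality is easy to demonstrate. A short sketch building the other logical operations, and the first step of arithmetic, out of nothing but that one comparison:

```python
def nand(a, b):
    """The single component: false only when both inputs are true."""
    return not (a and b)

# Every other logical operation is "just organisation" of NANDs:
def NOT(a):    return nand(a, a)
def AND(a, b): return nand(nand(a, b), nand(a, b))
def OR(a, b):  return nand(nand(a, a), nand(b, b))
def XOR(a, b): return AND(OR(a, b), nand(a, b))

def half_adder(a, b):
    """The first step toward arithmetic, built from NANDs alone."""
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

print(half_adder(True, True))   # (False, True): 1 + 1 = binary 10
```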

And this brings us to the large arena of Systems: organisations of dynamic intercommunicating parts that are found at every scale everywhere in nature and in the inventions of nature’s creatures. To take “systems perspectives” is such a new set of ideas and methods that they are not found in most standard curricula for children, despite the fact that “the systems we live in, and the systems we are” are intertwined, and include the cosmos, our planet, our societies, our technologies, our bodies, and our brain/minds: all united by systems perspectives.

Germane to our topic here is that systems organisations do not fit at all well with our normal use of language, and especially our deep needs to explain by stories, which have beginnings, middles and ends. Systems are most often displayed as large charts that organise visual and textual languages in such a way as to simultaneously relate views of parts and communications showing their relationships, which most often include “loops”, so that most systems don’t have beginnings or ends or just one path to take through them.

Systems are inherently dynamic, even when they appear to be in repose, so to understand them it is also necessary to be able to take them forward and backwards in time. The circularities and complexities of systems — as with Science — very often defeat our normal commonsense ways to think, and we need help of many kinds to start to grasp what might be going on, and what might happen.

Two examples that are critical right now are epidemic diseases and our planet’s climate. Our normal commonsense reasoning, much of it bequeathed by our genetics, is set up for the visible, the small, the few, the quick, the soon, the nearby, the social, the steady, the storied, and to cope. Epidemics and climate are not like these. It is hard to notice and take seriously the beginning of an epidemic or the climate crisis early enough for something to be done about it. Nothing seems to be happening, and normal commonsense thinking will not even notice, or will deny when attention is called to it. McLuhan: “Unless I believe it, I can’t see it”.

Scientists also need and use tools to help them think because they have the same kinds of genetically built brains all humans are born with. One of the uses of mathematics is to compute progressions that are difficult to envision. For example, compound interest grows exponentially beyond our unaided imagination, but we can easily calculate the growth. Epidemics have similar properties, and can also be calculated. Despite this, governments and most individuals are constantly surprised and underprepared for both epidemics and what taking on debt implies.
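
The calculation really is easy. A two-line sketch of the same exponential law that governs both compound interest and the early phase of an epidemic (the 10% daily rate is an arbitrary illustration):

```python
def grow(initial, rate, periods):
    """Exponential growth: the law behind both interest and epidemics."""
    return initial * (1 + rate) ** periods

# 100 cases (or dollars) growing 10% per day:
print(round(grow(100, 0.10, 7)))    # 195  -- roughly doubles in a week
print(round(grow(100, 0.10, 30)))   # 1745 -- about 17x in a month
print(round(grow(100, 0.10, 90)))   # over half a million in a season
```

The numbers feel implausible precisely because unaided intuition extrapolates linearly, which is Kay's point.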

The climate is a much more complicated set of interrelated and difficult to understand — or even to identify — systems, and simple calculations using simple math don’t work. But one of the most important properties of computers is that their media are dynamic descriptions of any number of relationships that can be progressed over time: they provide the lingua franca for representing, dealing with, and understanding systems of all kinds.
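
A minimal example of a dynamic description progressed over time: the textbook SIR epidemic model, stepped forward day by day with Euler's method. The parameters below are illustrative, not fitted to any real disease:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One day of a textbook SIR model (Euler's method, dt = 1 day)."""
    new_infections = beta * s * i     # contacts between S and I
    recoveries = gamma * i            # infected recover at a constant rate
    return s - new_infections, i + new_infections - recoveries, r + recoveries

s, i, r = 0.999, 0.001, 0.0           # fractions of the population
peak = i
for day in range(365):
    s, i, r = sir_step(s, i, r)
    peak = max(peak, i)

print(f"peak infected fraction: {peak:.2f}, total ever infected: {r:.2f}")
```

The feedback loop (infections depend on how many are already infected) is exactly the kind of circularity that defeats story-shaped commonsense but yields immediately to a progressed description.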

This is a new kind of literacy, and though it is a kind of mathematics, it is different enough from classical mathematics to constitute a whole new mathematics that is also a new science.

And yet it is still piggy-backed on the kinds of languages humans have used for 10s of thousands of years, but with new organisations that take what can be represented and thought about much further.

To return to the climate: in the late 50s, Charles Keeling did the first high-quality science to accurately measure CO₂, the main greenhouse gas in our atmosphere (without it the Earth would be about 60°F (33°C) cooler). Five years of measuring produced enough accurate data to build the first models of what was going on. This was enough to prompt the US NSF to issue a *warning* in 1963 that the planet was very likely to be in deep trouble in less than 100 years, and that it was time to start mitigating the problem.

A recent study has shown that even with the meagre super-computers of the 60s (which were literally 10s of millions of times slower than a single iPhone 6), all the climate simulations done back then have proved to be accurate within a few percent. Thus science and computing did their jobs to provide (as of 2020) about 55+ years of accurate predicting of the future that we are now just starting to cope with today. Many things could have been done starting back then, but were not.

In the context of the present book, this is one of the most important “futures of representations” — to be able to represent, simulate, and understand complex dynamic systems, especially those literally concerning life and death. This was already invented and in use by a tiny percentage of the world’s population 60 years ago. To paraphrase William Gibson “The future was already there, but just not distributed evenly”.

And it still isn’t. For example, Plato would certainly have appreciated the irony here of me writing the description of a very important future for representations on a computer — which is the vehicle for this future — and though everyone now has one, they will not be able to experience an example of what I’m describing because they will wind up reading it in a book printed on paper (or a computer simulating a paper book). Hard to beat this for “not distributed evenly”!

Now we can do a much better job of simulating extremely realistic futures — including both wonderful and dire ones — years ahead of time. But one of the oldest stories we know in our culture — that of Cassandra — is again being acted out in front of our noses.

Not all uses of language and writing need to be elevated. But any culture that abandons the more difficult higher levels to just embrace the easy and predominately oral uses of language, is not just throwing away the past, but setting itself up for a most dismal future.

Alessio Antonini

The Future Is Text: The Universal Interface

Text and reading have never been as pervasive and central as they are today. We live in a stream of digital revolutions pushing reading to the centre of our lives and activities. The result is the emergence of a new role for text as the all-purpose interface. This trend leads to a future made of text, where everything is mediated by text and everybody is directly and indirectly involved in the production and consumption of more text. The production of text is already collectively amplified if we count as texts receipts, reports, manuals, frequently asked questions, to-do lists, memos, contracts, chats, tags, notes, descriptions, emails, invitations, calendar appointments and so on. The Future Of Text therefore lies in the re-definition of text “craftsmanship”, focused on enabling and facilitating text-mediated access to, and interaction with, the relationships, functions and actors of the reality we live in.

To move forward, we have to take a step back and analyse how and why we found ourselves drowning in this proliferation of text.

Unexpectedly, new media did not marginalise text, and text augmentation through, for instance, hyperlinks and images is not yet considered vital. On the other hand, the use of text to augment all other media (i.e. text descriptions, alternate text, tags) is widespread and, in some cases, essential or a legal requirement. Indeed, text is necessary for archiving and retrieving media, and for accessibility by both humans and machines.

The result is that today text is the cornerstone of our global, fast-paced digital world. Requirements concerning accessibility and media management are not the only reasons for the use of text. Interestingly, text is also the response to the growing complexity and range of applications of information and communication technologies. Interface design is rapidly progressing toward a full conversion from flashy buttons, animations, icons, images, audio and video to plain text. This process, combined with the systematic digitisation and hybridisation of daily-life objects with digital technology, results in a progressive translation of identities, objects, activities, places and material and conceptual artefacts into a text-based form.

This is not surprising, as text can describe anything, drawing on broad common or specialist knowledge, or be used to define brand new languages.

In this scenario, authorship did not fade away into a co-production relation with readers, but evolved into a complex, articulated activity. Individual authors are being replaced by teams of specialists in narrative, media, social engagement, engineering, graphic design, research and gaming. Furthermore, the author’s responsibility now extends, through a growing set of media features, to the experience of reading.

The complexity of modern authorship is well represented by the production of audiobooks and social media. Audiobooks result from the involvement of writers, media editors, soundtrack composers and voice actors, while creating and delivering social media requires a complex orchestration of users and algorithms in the creation, selection, presentation and timely recommendation of content.

Everyone, as a reader, is already embedded in a world where text is a gateway. Reading is no longer simply consuming text but a trigger of events ranging from new social connections to liability, buying and selling, changes of policy, recommendations, or new content logging activities of which we may or may not be aware. Furthermore, reading cannot be avoided, as it is the most prominent and common modality of interaction: accessible, asynchronous, distributed, ubiquitous and general.

Text is a technology no longer aimed exclusively at supporting human communication. Text is used to encode and decode human and machine communication, to define new ideas and activities, to foster action and interaction, and to operate on the human, social and material sphere we live in. Thus, the Future Of Text is far beyond natural languages.

In this vision, text tools should support the re-configuration of terms and structures as mappings of a material or conceptual system of any sort, such as a narrative world or a service. Text tools should also support the definition of readers’ interaction with text, and therefore with the systems of which text is the interface. Authors should be able to define, and readers to understand, the effects and implications of reading, whether progressing a story or, for instance, accepting a contract (e.g. terms and conditions). Text tools should provide authors with precise, explicit control over the conditions and implications of text, and readers should be enabled to foresee the sense-making and social processes “enabled by” and “made accessible” through text.

Looking at a text, we should be able to “read” its interactional features, addressing questions concerning:

In conclusion, text and paratext as they are cannot define or express interactional capabilities. Addressing this fundamental limitation should not be delayed further, as we are already immersed in a world in which interactions with text, material or digital, have implications beyond the understanding and control of both authors and readers. This is partly the result of what has been called surveillance capitalism, but also of a legitimate drive of innovation toward achieving more with less and extending accessibility and inclusion to everyone. Thus, rather than ignoring the present we live in or looking back with sore eyes, we need a collective effort to break free from a vision of text grounded in classical studies and paper media, since today nothing is outside the new digital world.

Alex Holcombe

Combining The Writings Of Researchers, In Their Millions

We now have more text than ever before. And thanks to the internet, far more people are writing text than ever before and broadcasting it across the world.

While the growth in the proportion of the population that writes is wonderful, this development has had some negative side effects, such as the proliferation of fake news. In large part, these result from the inability of machines to distinguish text written by people who are informed from text written by non-experts, trolls, and bots.

Traditionally, the curation of reliable information and expert opinion has been done by journalists. This is a highly labor-intensive enterprise relying on networks of institutional and personal trust. Journalists rapidly produce large amounts of relatively trustworthy text, but the number of professional journalists is falling. To make matters worse, their public esteem appears to be low: fewer than 50% of Americans express confidence that the news media act in the best interests of the public.

Another profession that produces large amounts of relatively trustworthy text has actually maintained high esteem in the eyes of the public, despite attacks by politicians. I’m referring to scientists and other academic researchers. In total, over 2.4 million scholarly articles are published each year (White, 2019), most of them written by academics. The text in these journal articles is not only authored by experts; it is also vetted and reviewed by peers, which often results in improvements and the removal of some errors.

How much influence do these texts have on decisions by citizens and citizens’ beliefs regarding scientific matters? They can have an appropriately large influence, often thanks to the journalists and policy-makers who read some of them. On the other hand, some attitudes in the general populace fly in the face of scientific consensus. Climate denialism persists, an anti-vaccine movement has risen in the United States, and homeopathy has many adherents. A consensus or near-consensus view of scientists can be outweighed by other sources on critically important issues.

The reasons are varied, and there is no single solution. One problem is that there is not an efficient mechanism to surface the considered opinion of groups of expert researchers, and this is exploited by those with money and vested interests, such as tobacco and fossil fuel companies.

In a quest for what they describe as balance, journalists traditionally quote researchers on “both sides” of an issue. Unfortunately, this has been the case even on issues for which the overwhelming majority of scientists hold the same view. In such cases, the practice of quoting just one researcher on each side gives readers the impression that there is more uncertainty or disagreement than there actually is.

Writing with a deadline, journalists may not have time to correspond with more than two or three researchers. In such circumstances, it is difficult for them to determine whether the overwhelming majority of experts believe one thing or another.

Where such supermajorities exist and a few years have passed, many findings become standard textbook material and can gain the status of facts. But for relatively new topics or ones for which a consensus is slow to emerge, we have not had a way to combine texts to create a larger voice representing the expressed views of large numbers of researchers.

Perhaps artificial intelligence, which can already rapidly read large numbers of scientific articles, can extract the claims made therein and assess the degree to which these texts express support for them. In the course of doing so, it ought to be able to collate many facts about, for example, chemical properties and reactions. This, it has been suggested, would create many millions of dollars of economic value. But the fact that this has not yet happened, despite some effort, should give us pause.

Certainly one reason progress in this area has been slow is that the availability of scientific text to machines has been hindered by paywalls. But this is changing as the proportion of the scientific literature that is open continues to increase.

The startup has devised algorithms that assess how many scientific articles “support” or “contradict” a target article. This is basically sentiment analysis of scientific articles talking about other scientific articles. When someone is interested in something that is encapsulated by a published article, they can use this tool to get a quick read on scientific sentiment about it.
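As a rough illustration only (not the startup’s actual method), the core of such citation-sentiment analysis is to classify each citing passage and tally the labels. The `classify` function here is a stand-in for a trained model, and the toy keyword rules are purely hypothetical:

```python
from collections import Counter

def citation_sentiment(citing_contexts, classify):
    """Tally how many citing passages support, contradict, or merely
    mention a target article. `classify` maps a text snippet to one
    of the three labels (a hypothetical classifier)."""
    counts = Counter(classify(text) for text in citing_contexts)
    return {label: counts.get(label, 0)
            for label in ("support", "contradict", "mention")}

# Toy keyword classifier standing in for a trained model.
def toy_classify(text):
    if "confirm" in text or "replicate" in text:
        return "support"
    if "fail" in text or "contradict" in text:
        return "contradict"
    return "mention"

contexts = [
    "Our results replicate the original finding.",
    "We failed to reproduce this effect.",
    "See also the related survey.",
]
print(citation_sentiment(contexts, toy_classify))
# {'support': 1, 'contradict': 1, 'mention': 1}
```

A real system would replace `toy_classify` with a model trained on labeled citation contexts; the aggregation step stays the same.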

But many topics and claims of interest are not nicely encapsulated into a single scientific article. Where an entire scientific article is not the appropriate unit of analysis, AI must extract claims from the text of articles, but it may not yet be capable of this.

For some topics, AI may never succeed in determining what researchers believe, not because of any ultimate limitation of AI, but simply because, for many topics of interest, researchers’ texts give only an extremely rough indication of their beliefs. Even opinion pieces, which are nominally written to express one’s opinion, fall short.

An opinion piece is often thought of as capturing what the author thinks or believes. But to aggregate the opinion of many such authors, one needs to extract the propositions they have in common and assign a number to the strength of belief. This number can be referred to as a credence, or subjective probability that a proposition is true. Settling on even a rough number from the varied rhetoric within a text is difficult, and the author may not have carefully considered how strongly they believe in the position they are advocating. People can, however, learn to assign well-calibrated probabilities to propositions (Tetlock & Gardner, 2016). This most often manifests in real-world behaviour as betting.
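To make aggregation concrete: once each author’s credence in a shared proposition has been elicited as a probability, the numbers can be pooled. Two standard illustrative schemes (not tied to any particular project mentioned here) are a straight average and an average in log-odds space, the latter giving confident judgments more pull:

```python
import math

def linear_pool(credences):
    """Simple average of subjective probabilities."""
    return sum(credences) / len(credences)

def logodds_pool(credences):
    """Average in log-odds space, then map back to a probability."""
    logits = [math.log(p / (1 - p)) for p in credences]
    mean = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-mean))

# Three researchers' credences that a given finding will replicate.
credences = [0.9, 0.7, 0.6]
print(round(linear_pool(credences), 3))   # 0.733
print(round(logodds_pool(credences), 2))  # 0.76
```

Both pools assume the credences are independent and equally weighted; real elicitation projects typically weight forecasters by their track record.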

As of this writing, multiple projects are eliciting probability judgments from researchers, both with and without betting. A typical question posed to researchers is whether the findings of a particular study will replicate if the study were to be conducted again. The credences thus elicited can be useful to corporations considering a particular hiring practice, governments considering implementing an intervention, and pharmaceutical companies looking for promising drug targets from the many candidate targets frequently announced by researchers in the scientific literature.

It is not clear how this might be scaled up and become a normal scholarly activity that signals the views of experts on topics of broad concern to the public. Academic research is niche-driven and individualist. It is not a system that encourages large-scale collective projects. However, this changed decades ago in high-energy physics as it became apparent that fundamental discoveries might no longer be made without combining the efforts of hundreds or thousands of researchers. More recently it happened in genomics, and it’s begun in psychology.

New institutions have sprung up to manage large-scale collaborations, with associated decision-making processes. These processes in some cases include a mechanism for soliciting the opinion of their members, for example through voting. It is here that opportunities exist for researchers to make their voices heard more widely, not necessarily by speaking as one, but simply by making it clear how many believe one thing or another, and to what degree.


Amaranth Borsuk

Embodying Text

The Future Of Text is in our hands. I mean that both figuratively, as a call to action, and literally, as a reminder that text comes to us through the body, often at arm’s reach. Sometimes auditory, sometimes tactile, text always requires the engagement of our sensorium, some transfer between body and surface—whether we are gazing at a monument, palming a paperback, running a finger along a braille page or touch screen, or listening to a live or recorded performance. As artist Mel Bochner reminds us, “language is not transparent,” it is a material—one we develop to meet the needs of each society in which it arises—and the portable storage and retrieval devices through which we distribute it take shape from both the materials ready to hand and the desires of writers, readers, scribes, publishers, and myriad other players who have their hands in it. Whether we are looking at clay tablets, khipu, or bamboo scrolls, the medium through which text is transported in turn shapes the way we read, and even shapes language itself. Text, which we currently cannot access without the intervention of some medium or other, is shaped to our hands, our arms, our ability to stand or sit, hold or look. Text requires interfaces to meet the faces that meet it.

This tactility is braided into its etymology. Text comes from a Latin word that means “to weave, also to fabricate, especially with an axe” (Watkins, 92). As impossible to reconcile as weaving and chopping may be, this proto Indo-European root, teks-, reminds us of the intertwining of creation and destruction at the knotty root of language. Words can build or destroy, can bring about the rapid development or the demise of civilization. They can be targets for censorship and destruction. And they can be a web linking distant thinkers and spreading ideas or ideologies. Those who toil in language are thus engaged in net work, and it behooves us to think about the dimensions of that net: what it holds, and what (or who) passes through.

Text’s future must be tied to the human body. It has adapted and will continue to adapt to the body’s needs, and those who design the interfaces through which we make contact with it must keep this in mind as they build architectures of interactivity including and alongside the printed page. History has shown that as the book changes form, multiple media co-exist for centuries—just as the scroll and codex met the needs of different audiences for more than seven hundred years, the codex and touchscreen are not yet ready to cede their roles as transporters of language. The fact that text’s material forms don’t simply give way to one another in a series of tectonic shifts should come as no surprise since, technically, the same etymological root from which text arises also gives us technology—the terms are associated by craft, as techne like weaving and other arts provided an object of study for those ancient Greeks wishing to give them the systematic treatment of logia, tatting their knowledge into treatises for others to unbind.

The pretext with which I began is both factual and forward-looking. Even in its digital forms, those current and those to come, text is at our fingertips. Its digitization allows it to rapidly adapt to the needs of readers with unique embodiments and requirements, for whom voice-based assistants, text-to-speech, and adjustable type size are but a few of the many affordances of electronic devices. Text meets the body where it is, and in so doing it has the potential to become tissue, a second skin, something that appears in and on the body (for good and ill). Yet we currently imagine text as disembodied—pure data—as if it were possible to separate the two. This is true in the development of text technologies, particularly those that analyze “corpora” of digital written texts such as Google Translate, which renders text across languages through algorithmic analysis. And it is also true in popular consciousness, as when the media describe Russian interference in the 2016 American election as the work of “Russian hackers,” which makes it sound more like a data breach than the creation and promotion of carefully worded ads intended to sow division on hot-button issues. Treating textual data this way elides the very real human conditions under which it is written: material, physical, cultural, financial, and political—all of which are essential to a deeper understanding of how a given text operates. In our everyday lives, the ability to imagine digital words disconnected from human bodies leads to a host of problems, from academic citations that erase the work of women and writers of color, to the badgering behaviour of internet trolls. In an era of wall-building in which spin and subterfuge untwine text from context, warping its meaning, language can make space for subtlety and nuance. My hope for the Future Of Text is that, as in the past, it will adapt to us as we adapt to it.
That it will bring the body—of writer and reader—back into view in all its difference and complexity.


Amira Hanafi

Text For An Opaque Future

In school, I was taught a subject called reading comprehension. You had to read a text and then answer questions about what you’d read. These exercises are a test of the reader as well as of the text. They instruct on the interpretation of symbols on a page as much as they generate an expectation for the text’s transparency.

Nationalism grips many around me, who fervently want to know where I’m from. I’ve regularly encountered these folk since my childhood in the 1980s; they have recently multiplied. I’ve come to understand that they’re asking me to caress their anxieties by providing a category to which I belong. They demand transparency, but whatever story I tell remains incomplete.

“Obscurity is not lack of light. It is a different manifestation of light. It has its own illumination,” says Etel Adnan, in conversation with Lisa Robertson. The LCD screen is backlit, because it cannot produce its own light. It needs exterior illumination to produce a visible image.

This brief text makes an argument for opacity.

Language is material, manipulated by tongue, pen, code. Words are concrete; the written text is opaque. The digital text is unique in that it can be edited at any time, if you have access to the back end. Changes in print leave more obvious traces.

“The grotesque body, as we have often stressed, is a body in the act of becoming. It is never finished, never completed; it is continually built, created, and builds and creates another body. Moreover, the body swallows the world and is itself swallowed by the world,” writes Mikhail Bakhtin on the texts of Rabelais.

Édouard Glissant describes comprehension as a violence, “the movement of hands that grab their surroundings and bring them back to themselves. A gesture of enclosure if not appropriation.” I am not afraid to be misunderstood; I am afraid my texts will be comprehended. I’m searching for a language that, as Luce Irigaray describes, “appropriates nothing. It gives back as much as it receives, in luminous mutuality.”

Language is used material, and used again. That is, all language is other people’s words. Everyone is passing characters between them. Language is process, always becoming in context. This text makes a claim for a future in which reading and writing converge in agonistic assemblage — a carnival of collaboration.

“We demand the right to opacity,” after Glissant.

As a social practice, language produces: selves, power, and affect. The future text distributes its capacity to change power relationships. It privileges interaction over comprehension. The future reader is multiple; they write through touch and feeling. Our relations are terrain for invented surfaces.

Amos Paul Kennedy Jr.


Anastasia Salter

(Un)Proprietary Texts

The Future Of Text is waiting for us in the (un)proprietary: the (un)owned, dis-corporate, open source and remixed space that waits outside the disappointing ecosystems of social media’s towering platforms. This future must indeed transcend platforms, and break text free of 2020’s proprietary binds and binaries: the boxes that seek to confine text to different ecosystems, that couch visible text into endless, unreadable metadata, that make text unreadable outside of predefined tools. In 2020, text frustrates and eludes us: uneditable PDFs; unwatchable Flash poetry; difficult-to-export social media data; abruptly canceled Twitter bots, existing only in archives or not at all. Text also aggravates in aggregate, appearing truncated on social media platforms, made uniform and flattened in hierarchy by the algorithms overseeing access. Mechanisms of making, sharing, amplifying, and remixing text are more available than ever, and offer the illusion of populism and equity: however, the proprietary mechanisms and platforms on which these acts occur must be transcended for any of that vision to be realized. Those who imagine the Future Of Text must reject closed platforms, and acknowledge the role that platform regulation (and algorithmic manipulation) has played in warping the reception of text in 2020—a moment when a false claim regarding a COVID cure can recirculate rapidly to 17 million views and counting [7], and thus drown out texts providing scientific guidance in a time of crisis. The text platforms of the future must push back, embracing the historical foundation of the cross-platform, distributed web and resisting encapsulation.

Communities on the web invested in their Future Of Texts have already modeled the way forward: one of the most compelling such projects, the fanfiction community Archive of Our Own, offers a feminist model for communally tagged, collectively preserved, cross-platform accessible text at incredible scale, hosting over six million texts as of July 2020. Casey Fiesler et al. point to the project as an exemplar of feminist HCI, built upon “accessibility, inclusivity, and identity” [2]. A communal commitment to sustainability and infrastructure amplifies these values and builds trust from the text-makers who in turn shape the archive’s reach and impressive embedded histories. Meanwhile, the web is filled with warnings of what happens when similar communities of textual making entrust their practices to proprietary platforms and corporate interests. Consider the fate of tumblr, once hailed as an essential space for marginalized sexual identities and trans texts, abruptly censored and rendered irrelevant by new corporate policies [1]. Even without such corporate intervention, the quiet deaths of similar platforms take their texts with them: from the 2017 closure of locative discussion site Yik Yak, to the 2015 death of Facebook-esque Friendster, to the many smaller communities whose deaths go unremarked, text is continually fading out of reach. The texts that should be easily saved and ported from such deceased portals are frequently forgotten entirely, rendered obsolete or inaccessible by combinations of proprietary formats, abandoned databases, and archaic tools for user archiving and export.

Nowhere is this need for the (un)proprietary more apparent than in the making and distribution of born-digital texts that make inherent use of the affordances of hypertext and the interactive web. I see this (un)proprietary, rebellious, Future Of Text foreshadowed now in Twine, in Ink, in Bitsy: in open source platforms for the making of hypertext, with grammars of choice and linking continually extended and reimagined by their users [4]. These are not the systems of the past: this is not Eastgate’s StorySpace, with its gated editor and unplayable past works. This is not Macromedia’s, then Adobe’s, Flash, offering the allure of easy integration of text, interaction, and imagery only to crumble under the weight of its own ambition and security flaws [5]. The tools we need for this future must be communally owned, maintained by makers, and shaped to circulate free of walled gardens (such as Apple’s App Store) that can too quickly render a work obsolete and unreachable.

Let 2020 serve as a warning that if text is to have a future, it cannot be relinquished to closed platforms. We must learn from our past challenges, and from the difficult work of preservationists trying to save proprietary digital texts from extinction. Acts of documentation and archiving, such as Stuart Moulthrop and Dene Grigar’s “Pathfinders” series of traversals, are a starting point necessitated by the text platforms of the past—the platforms of text-future should instead strive to transcend the need for static conversion [3]. The Internet Archive’s efforts to save the records of platforms past (such as the epic Google+ preservation project) [6] should serve not merely as a model for how text can be preserved, but as a warning to all who make text with the future in mind: the proprietary will always demand (and resist) intervention, while the (un)proprietary offers a path forward for sustainable access.


  1. Carolyn Bronstein. 2020. Pornography, Trans Visibility, and the Demise of Tumblr. TSQ 7, 2 (May 2020), 240–254. DOI:
  2. Casey Fiesler, Shannon Morrison, and Amy S. Bruckman. 2016. An Archive of Their Own: A Case Study of Feminist HCI and Values in Design. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16), Association for Computing Machinery, San Jose, California, USA, 2574–2585. DOI:
  3. Stuart Moulthrop and Dene Grigar. 2017. Traversals: The Use of Preservation for Early Electronic Writing. MIT Press.
  4. Anastasia Salter and Stuart Moulthrop. Pending. Twining: Critical and Creative Approaches to the Twine Platform. Amherst Press.
  5. Anastasia Salter and John Murray. 2014. Flash: Building the Interactive Web. MIT Press, Boston, MA.
  6. Ben Schoon. 2019. Google+ public posts will be saved to Internet Archive. 9to5Google. Retrieved July 28, 2020 from
  7. Isabel Togoh. 2020. Facebook Takes Down Viral Video Making False Claim That ‘Hydroxychloroquine Cures Covid.’ Forbes. Retrieved July 28, 2020 from

Andy Matuschak & Michael Nielsen

Timeful Texts

The most powerful books reach beyond their pages—beyond those few hours in which they’re read—and indelibly transform how serious readers see the world. Few books achieve such transcendent impact, yet given their physical constraints, it’s remarkable that any do. As a medium, books have no direct means of communicating with readers over time. The physical text is stuck on the page, generally read linearly and continuously in a few sittings.

To be transformed by a book, readers must do more than absorb information: they must bathe in the book’s ideas, relate those ideas to experiences in their lives over weeks and months, try on the book’s mental models like a new hat. Unfortunately, readers must drive that process for themselves. Authors can’t easily guide this ongoing sense-making: their words are stuck on pages which have already been read. How might one create a medium which does the job of a book, but which escapes a book’s shackled sense of time? How might one create timeful texts—texts with affordances extending the authored experience over weeks and months, texts which continue the conversation with the reader as they slowly integrate those ideas into their lives?

Each year, hundreds of thousands of students study Molecular Biology of the Cell. The text presents endless facts and figures, but its goal is not simply to transmit reference material. The book aspires to convey a strong sense of how to think like a cell biologist—a way of looking at questions, at phenomena, and at oneself. For example, it contains nuanced discussions of microscopy techniques, but readers can’t meaningfully try on those ways of thinking while they’re reading. Readers must carry this book’s ideas into their daily interactions in the lab, watching for moments which relate to the exercises or which give meaning to the authors’ advice. This model of change is brittle: the right situation must arise while the book is still fresh in readers’ minds; they must notice the relevance of the situation; they must remember the book’s details; they must reflect on their experience and how it relates to the book’s ideas; and so on.

As we consider alternative approaches, we can find inspiration in the world’s most transformative books. Consider texts like the Bible and the Analects of Confucius. People integrate ideas from those books into their lives over time—but not because authors designed them that way. Those books work because they’re surrounded by rich cultural activity. Weekly sermons and communities of practice keep ideas fresh in readers’ minds and facilitate ongoing connections to lived experiences. This is a powerful approach for powerful texts, requiring extensive investment from readers and organizers. We can’t build cathedrals for every book. Sophisticated readers adopt similar methods to study less exalted texts, but most people lack the necessary skills, drive, and cultural contexts. How might we design texts to more widely enable such practices?

Guided meditation smartphone apps offer a promising design approach. Meditation’s insights unfold slowly. Aspirants are typically advised to practice daily. Over weeks and months, they may begin to experience the world differently. Some concepts in meditation make no sense if you still can’t perceive your breath clearly, so they’re best introduced later. A book on meditation isn’t well-suited to this kind of slow unfurling: much of it may not make sense initially; readers will need to re-read it again and again as their practice progresses. But a guided meditation app’s experience is naturally spread over time. Each day’s session begins and ends with brief instruction. Instructors offer topical prompts throughout the practice. Intermittent repetition can keep old ideas active in students’ minds until they’re ready to engage. Rather than delivering a bound monograph on meditation, instructors can slowly unfurl an idea over hundreds of days. Critically, these apps are a mass medium, just as books are: lessons can be “written” once and redistributed cheaply to huge audiences. Could this approach be applied more generally?

To engage with a book’s ideas over time, readers must remember its details, and that’s already a challenge[1]. One promising solution lies in spaced repetition memory systems[2], which allow users to retain large quantities of knowledge reliably and efficiently. Like meditation, these systems involve a daily practice: every day, a reader maintains their memory library by reviewing a few dozen lightweight prompts. Each prompt asks a detailed question, like “What types of stimuli does George Miller’s span of absolute judgment describe?” Each day’s session is different because each prompt repeats on its own schedule. When a user remembers or forgets the answer to a prompt, the system expands or contracts that prompt’s repetition interval along an exponential curve. These expanding intervals allow readers to maintain a collection of thousands of prompts while reviewing only a few dozen each day. Practitioners generally complete their review sessions in moments which would otherwise go unused, like when waiting in line.
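The expanding-interval scheduling described above can be sketched in a few lines. This is a deliberate simplification: real systems (SM-2 and its descendants) use graded responses and per-prompt ease factors, and the names and parameters here are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    question: str
    interval_days: float = 1.0  # days until the next review

def review(prompt, remembered, growth=2.0, shrink=0.5, min_interval=1.0):
    """Expand the interval on success, contract it on failure.
    Repeated successes grow the interval exponentially, so a prompt
    answered correctly each time is seen less and less often."""
    if remembered:
        prompt.interval_days *= growth
    else:
        prompt.interval_days = max(min_interval, prompt.interval_days * shrink)
    return prompt.interval_days

p = Prompt("What types of stimuli does Miller's span of absolute judgment describe?")
for _ in range(4):
    review(p, remembered=True)
print(p.interval_days)  # 16.0 — the interval has doubled four times from 1 day
```

Because each prompt carries its own interval, every day’s session is a different mix, and a collection of thousands of prompts stays reviewable in a few dozen questions per day.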

Despite their efficacy, these systems are not yet widely adopted. One important barrier to adoption is that it’s surprisingly difficult to write good prompts. To explore one potential solution, we created an experimental quantum computing textbook, Quantum Country[3]. It’s written in a “mnemonic medium,” interleaving expert-authored spaced repetition prompts into the reading experience. Our goal was to help readers engage with challenging technical material by supporting their memory. As we interviewed readers, though, we noticed that the regular review sessions didn’t just build detailed retention: the ongoing practice also changed readers’ relationship to the material by maintaining their contact with it over time. These people didn’t just read the book once and proceed with their lives. Quantum Country’s review sessions returned readers to its ideas again and again over weeks and months.

Quantum Country’s prompts are designed to help readers remember technical material, but it may be possible to adapt similar mechanisms to support future timeful texts.

Consider The Elements of Style, a classic writing primer by Strunk and White. The authors demonstrate the value of parallel construction with this example from the Bible: “Blessed are the poor in spirit: for theirs is the kingdom of heaven. Blessed are they that mourn: for they shall be comforted. Blessed are the meek: for they shall inherit the earth.” But it’s not enough to read an example. Good writers’ ears become automatically alert to these constructions. They notice opportunities to create parallelisms, and they notice dissonance when similar phrases are presented differently. It takes time to develop this awareness.

What if, a week after learning about parallel construction, a reader’s review session included this prompt?

The following quote has been modified to remove a parallel construction. How might you rewrite it to add parallel construction? How do the two differ in effect?

“My fellow Americans, ask not what your country can do for you. Instead, reflect on what you can contribute to these United States.” (modified from a quote by John F. Kennedy, which would be revealed in its original form once the reader had finished with the prompt)

A week or two later, a similar prompt might appear with a different distorted quote. For example: “You can fool all the people some of the time, and you can mislead many listeners consistently. But you won’t be able to fool everyone reliably.” (adapted from Abraham Lincoln)

Such prompts might seem onerous or strange on their own, but what if most educated people maintained a regular review practice of the kind we’ve described? The example reflection prompt would sit between others on physics, poetry, and whatever else you’d been thinking about. If you found the prompt useful, it would recur in another week; if not, perhaps it would reappear in a few months. A book like The Elements of Style might include dozens or hundreds of prompts like these on various topics, so that you’d see one or two each day over weeks and months as you integrate its ideas into the way you view and practice writing.

In early chapters of Quantum Country, readers see every prompt as they read the text, and the prompts remain the same in the ensuing review sessions. But in the final chapter, we added a new kind of prompt which changes over time in a programmed sequence. Perhaps future timeful texts could unfurl their contents over the course of many sessions, much as we observed that digital meditation lessons do.

Of course, spaced repetition is just one approach for writing timeful texts. What other powerful tools might it be possible to create, making future books into artifacts that transcend their pages, as they slowly help readers shape their lives?



Ann Bessemans & María Pérez Mena

New Reasons To Design Type

Type designers are often asked whether there are not already enough typefaces. Laymen question the point of new typefaces and then, judging by their look, ask what is actually new about them (Unger, 2006). The fact that the very ordinary and the very original can prove compatible in one design (Lommen, 2003) does not make things any easier for the “non-designer” reader. Letterforms do not draw the attention of the reader as long as they keep their visual consistency. While the variation among existing typefaces seems too limited for some designers to work with, for others it appears too large. Within the design community it is sometimes claimed that three typefaces should be enough to design with, but it seems impossible to agree on which three those would be.

Although many may think that the conditioned forms of the letters leave little room for the creative input of the type designer, the reality is rather the opposite. There is sufficient space for building personal structures on these foundations (Unger, 2006). Type designers have not only the knowledge but also the insight to come up with new type designs. At this point, the question of whether there are not enough letters can be considered irrelevant. The reasons for designing new typefaces are just as diverse as those in all other art disciplines (Blokland, 2001) and, by extension, those behind every artefact that humans have designed so far.

Type design can have either an aesthetic/expressive or an ergonomic motivation. If a project originates in an aesthetic/expressive motivation, the design contributes to the diversity of all possible forms in which letters can exist. In line with this, around 20% of type production is driven by a “free idea” of the designer (Blokland, 2001). There is great freedom within this idea; its application is less defined and the requirements are vaguer. The value of this concept lies in the pleasure of making letters with the main purpose of expressing beauty and entertaining the senses. Such type designs are more in line with the approach of the liberal arts. This way of designing comes closest to what is generally thought of as type design (Blokland 2001: 65).

In contrast with this 20%, the remaining 80% of type production encompasses the design of new typefaces (1) for a specific application or (2) as the adaptation of an existing font to, for example, a new use or a new application (Blokland 2001: 63). Here the application of the new typeface is clearly outlined and its requirements are clearly stated. Aesthetic quality is one of the components the designer needs to consider, but it is not necessarily a priority when type is designed for a specific application or adaptation.

In the last decade, this 80% of type production has witnessed the introduction of a blooming scientific approach to typography and type design, which has created its own category within it. This category comprises new typefaces created within the academic framework of a doctoral dissertation (or research project) that aim to address specific issues for specific target groups. The design process of these typefaces is preceded by in-depth research into the nature of the matter from the different perspectives that bear on that particular issue and on the act of reading. This process is therefore orchestrated by the interdisciplinary approach that lays the foundations of reading research. As type designers we have the inestimable opportunity and the ethical responsibility to contribute to society by creating new artefacts that better suit the needs that arise every day in any community.

When letters are designed functionally (that is, when ergonomics precedes aesthetics), they are intended to represent an improvement. This way of designing corresponds better to the approach of the applied arts. Type design is here a means of generating knowledge: it brings thoughts, ideas, and images from one mind to another (Warde, 1956). The latter forms the core of typographical science. A discussion may therefore arise over whether a typeface is or is not an art form since, as referenced above, its first goal is not always to express aesthetics and entertain the senses.

Because type designers mainly design mass products, it can be said that typography is in the domain of ergonomics; typography is an ergonomic application. Here the mass products, letters, are adapted to human physiology, in this case to the properties of the eyes and brain. Type design must be adapted to this so that readers can rely on the letters and take in the content. This means that type design and typography are more than just an art discipline. Not only does the type designer/typographer take satisfaction in the end result; there is also satisfaction in contributing effectiveness for the target audience (Tracy, 1986).

As a type designer, you constantly think about your readers and try to represent their needs and interests (Unger, 2007). Through a combination of aesthetics, experiment and legibility, the type designer tries to bring his sense of designing type to new fundamental forms in which unity prevails. The typeface enables the fundamental skill of reading; the content, however, prevails. Letters are not unimportant, but it should be noted that the content/language will always be stronger than its visual form. After all, the letters remain just a means to represent and transfer content.

The use and design of typefaces is a matter of taste, feeling and even responsibility. The fact that letter shapes are inexhaustible as a source of interest and pleasure is something to be thankful for (Tracy, 1986). Type designs can serve special needs, for example a specific purpose or a separate target group. New letters are used as fuel for new typography, design (Blokland, 2001) and insights into legibility. New type designs contribute not only to the design world but also to the scientific world, because they provide insight into the complex concept of legibility.


Andries (Andy) van Dam

Thoughts On The Present And Future Of Text

As a child I was obsessed with books, and internalized what an amazing invention the written word was. After becoming a computer scientist I first became interested in how computers could recognize hand-printed numerals and text, but switched my interest upon being stunned by Ivan Sutherland’s film on Sketchpad and the vision it opened up for real-time visual communication with computers through display consoles. This was a radical vision in an era characterized by batch computing on mainframe computers using punched cards, and it launched me into a career of more than five decades in interactive computer graphics. Via a chance encounter at a computer conference, Ted Nelson turned me on to the idea of using displays for creating and exploring hypertexts. The 1967 Hypertext Editing System that he, my students, and I created on Brown’s IBM mainframe, using a 2250 vector display with a light pen and function keys, had the dual purpose of experimenting with hypertext while being the first free-form (string- rather than line- or statement-oriented) text editor/word processor. Since then I’ve been an evangelist for electronic books in the full sense of the term: hypermedia corpuses with simulation-based interactive illustrations. My research group and I have been building systems for authoring and active reading of such collections of hypermedia documents (webs, in effect) ever since.

In the last half-century, display-based visual communication has prospered, indeed displaced much of textual communication, for informational and instructional purposes as well as for amusement. In the ‘60s we evangelists thought of reading and writing online as a way to allow the real-time augmentation and updating of documents, their effortless and instant distribution (thereby saving trees), their use to facilitate multiple points of view, and many other benefits. But we didn’t foresee a massive shift from text and text-dominant print media (including textbooks) to visual “soundbites” such as Facebook and Twitter posts, YouTube voyeurism and short instructional videos, nor Photoshopping, fauxtography and deep fakes playing havoc with the concept of visual truth. I am both amazed at what our technology can do these days, e.g., high-end graphics cards used to create real-time special effects that can’t be distinguished from actual photographs and videos, and at the same time profoundly depressed by the societally harmful uses to which the tech is all too often put. I mention but a few observations below and speculate a bit about whether we’ll see text survive as a communication medium that lends itself to thoughtful, grounded discourse.

1. Attention span/deficit: having essentially instant access and ability to context-switch through the docuverse via searching and following hyperlinks has created a tendency to “butterfly”, i.e., to flit from a superficial scan of a page or two to a related or even unrelated context (e.g., checking your email or various social media). My students today report how hard it is for them to concentrate, to deconstruct linear argumentation in a book or even a lengthy article (print or online). I confess to looking at the length of a particular article before deciding whether I have time or interest to read it, and have trouble staying in the groove regardless.

2. YouTube videos are increasingly used, indeed preferred, for instruction both because they are visual and short, thus easier to digest than prose that must be laboriously deconstructed, let alone prose that has mathematics in it! Instructional videos are typically not curated and many of them are of dubious quality, a more general problem of online content.

3. As information continues to grow exponentially (a trend noted in Vannevar Bush’s landmark 1945 article “As We May Think”, the foundational document of the hypertext field), there is ever greater proliferation and fragmentation of sources, and an increasing disappearance of trusted points of view. This is true of mass media as well as of more scholarly communication. The number of journals and conferences in all fields is increasing dramatically, but few are of genuine archival quality subject to proper peer review. “No one knows you’re a dog on the internet”, but conversely, the democratization of information production is all too often a quantity-versus-quality trade-off. Need we be reminded of the profound lack of quality control and taking of responsibility for the content of tweets? There is a sea of information in front of us, but most of it is not potable.

4. In the early days of online documents, our model was active reading, i.e., the equality and intertwined nature of authoring and reading. Most people aren’t going to produce archival content, but at least they should be able to annotate and comment and have threaded discussions. While there are some platforms for such active reading, it’s not a universal capability of most authoring applications. As a consequence, I have to choose between the authoring environment I want to be in and an easy-to-use commenting and annotation capability, let alone some kind of threaded discussion with full authoring capabilities. I can’t get good authoring and good annotation in one application, as I had in research systems half a century ago. Those closed systems were also self-sufficient, all-encompassing; nowadays we must manually integrate a variety of applications and even ecosystems never designed to interoperate. Life has gotten harder, not easier, because of the expanding technologies. To address these issues my research group is working on our n’th (spatial) hypermedia system. Facilities include good organizing and annotating/commenting tools, and interoperability with common document-composition systems such as Word and Google Docs, to the extent enabled by their APIs, to augment their feature sets.

5. While I’ve been evangelizing e-books with interactive illustrations for more than half a century, I have never attempted one, nor can I point to any exemplars. I realized early on as a textbook author that while writing and polishing a linear print-medium textbook is typically a decade-long martyrdom, designing and implementing an interactive e-book that would fulfill all the promises of a hypermedia web would be an order of magnitude more work. Furthermore, it would require an interdisciplinary team of subject-matter experts, writers/editors, interactive-illustration and UX designers, testers, and other members, akin to the teams creating modern video games. Forming and sustaining such a team would be a time- and money-consuming enterprise well beyond the scope of academic authors. Moreover, there is no business model (unlike with games) that would allow such a multi-year investment.

In summary, while it is easier than ever to produce hypermedia content, in the future we will be ever more swamped by the amount of content available, without quality standards and trusted point of view. Those of us still treasuring the written word in particular will have to fight for a proper balance between textual exposition and immersion on the one hand and visual (and aural) communication on the other. The “next shiny object” phenomenon will produce a second or third wave of AR/VR/MR madness as a more immersive experience than that provided by text, but I believe thoughtfully (and tastefully) authored text will endure, even when augmented by machine-learning enabled text generation of both fiction and exposition.

Anne-Laure Le Cunff

Textual Maps

“A book is not an isolated being: it is a relationship, an axis of innumerable relationships,” wrote Jorge Luis Borges in Labyrinths (1964). In most traditional books, the nature of the text as the fruit of an infinite dialogue is implied. The author’s thought processes—their schemas, their mental models, and the decision trees behind their work—are kept private, hidden in notes and drafts. Academic literature may be the rare exception where explicit thought processes are somewhat incorporated in the final text. Yet, for a variety of reasons—such as confirmation bias, long-established convention, honest oversight, or even conscious manipulation—these after-the-fact reports are often skewed or incomplete. For instance, the methods and procedures section of a research paper will only feature a linear description of the final protocol. The author’s train of thought is not captured. Beyond static academic references, no associative trails are included. It is left to the reader to speculate what alternative routes were discarded. The result is a bounded, conjectural map of the discourse which shaped the published text. To fully actualize humanity’s collective intelligence, unbounded textual maps are needed.

From the quantum text in The Garden of Forking Paths (Borges again, 1941) to the memex in As We May Think (Vannevar Bush, 1945), the concept of endlessly exploring ideas through associative trails is not new. But it was not until the 1960s that technology finally started to catch up. Breaking free of the printed book’s paradigm, the advent of digital documents and hypertext has made it possible to build explicit paths between ideas. Wikipedia is the crown jewel of modern textual maps, surpassing H. G. Wells’s dream of a “world brain”. However, the potential for digital text to help us navigate one another’s minds is still largely underexplored. Unbounded textual maps ought to allow readers to explore ideas through space and time. Dynamic and interactive, they should record intertextual relationships, the connections between intratextual ideas, and the evolution of atomic ideas through time. Akin to neural pathways, their metaphorical roads—in the form of hyperlinks and metadata—should be non-linear and bi-directional, with collective marginalia functioning as figurative town squares. Lastly, textual maps should not be confined to the borders of a specific domain: they should empower the reader to cross and build bridges between a wide range of knowledge areas.

Most of the technology to forge this reality is already available. In a sense, the software development platform GitHub is a bounded garden of forking paths, enabling developers to create new branches (parallel versions of a repository), edit their content, and potentially merge them back with the main version. Some authors have already started writing books using Git, the distributed version-control system powering GitHub, in order to keep track of the evolution of their ideas, or to work collaboratively.
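The branching-and-merging workflow described above can be sketched in a few commands. This is a minimal, hypothetical session; the file name, branch name, and commit messages are invented for illustration, not drawn from any particular author's practice:

```shell
# A manuscript as a garden of forking paths, tracked with Git.
git init book && cd book
git config user.name "Author" && git config user.email "author@example.com"

echo "The garden had one path." > chapter1.txt
git add chapter1.txt
git commit -m "Draft chapter 1"

git switch -c alternate-ending        # branch off a parallel version of the text
echo "A different path." >> chapter1.txt
git commit -am "Try an alternate ending"

git switch -                          # return to the main version
git merge alternate-ending            # fold the experiment back in
git log --oneline --graph --all       # the forking paths, made visible
```

Every draft, dead end, and merge remains in the history, which is exactly the record of evolving ideas the authors mentioned above are after.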

Bi-directional links are not a recent invention either. ENQUIRE, a software project written in 1980 by Tim Berners-Lee at CERN, considered by many to be a precursor to the World Wide Web, featured bi-directional links. Beyond creating basic associative trails, these links also described the relationship between connected ideas, such as was made by, includes, is part of, uses, is used by, is described by. But the bi-directional atomic approach to textual maps did not make it to the World Wide Web. In the words of Tim Berners-Lee, the Semantic Web “remains largely unrealized.” Linkbacks such as webmention and pingback have scarcely been adopted by online writers. Besides the hidden “What links here” manual search function, Wikipedia does not offer bi-directional links. Rather than paragraph-level connections, linking to whole pages is the norm.
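The idea of typed, bi-directional links is simple to model. The sketch below is a hypothetical Python illustration, not ENQUIRE's actual design: recording a forward relation automatically records its inverse on the target, so "what links here?" is always answerable.

```python
from collections import defaultdict

# Each forward relation has a named inverse, in the spirit of the
# relationship types mentioned in the text.
INVERSE = {
    "made": "was made by",
    "includes": "is part of",
    "uses": "is used by",
    "describes": "is described by",
}

class LinkStore:
    def __init__(self):
        self.forward = defaultdict(list)   # node -> [(relation, target)]
        self.backward = defaultdict(list)  # node -> [(inverse relation, source)]

    def link(self, source, relation, target):
        self.forward[source].append((relation, target))
        self.backward[target].append((INVERSE[relation], source))

store = LinkStore()
store.link("Hypertext", "includes", "Links")
print(store.backward["Links"])  # [('is part of', 'Hypertext')]
```

The one-way hyperlink the Web actually adopted keeps only the `forward` half of this structure, which is why "What links here" requires a separate, after-the-fact index.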

Cognitive overhead is the main obstacle to the proliferation of unbounded textual maps. Manually adding metadata, creating explicit links, and maintaining versions is time-consuming. Machine-readable formats have been historically difficult for humans to comprehend and interact with. Because transferring ideas from the mental to the digital space adds an extra step to the creative process, knowledge workers tend to only publish the final output of their reflection. Nevertheless, recent years have brought an explosion of human–computer interaction technologies which will drastically reduce the cognitive overhead of creating and navigating textual maps.

New tools for thought—which are increasingly turning into algorithms for thought—let readers import any text to not only augment their reading experience, but to connect intratextual and intertextual ideas as well. Closer to the way the mind works, these metacognitive tools turn readers into cartographers by enabling them to comfortably create supplemental metadata, connect ideas together at the paragraph level, navigate implicit links, add comments, and remix content. Moreover, artificial intelligence has already started to amplify human intelligence, merging two fields historically studied separately (John McCarthy’s Stanford Artificial Intelligence Laboratory and Douglas Engelbart’s Augmentation Research Center at Stanford Research Institute were founded almost at the same time). Thanks to sophisticated knowledge graphs and neural networks, search engines have become sense-making engines, helping chart, connect, and explore infinite textual maps. However, many are proprietary: the largest textual map in the world lives in data centers belonging to Google. Such tools offer a controlled experience crafted by a biased superintendent. The reader-as-cartographer’s frame is still bounded to the heuristics of a predesigned sandbox.

What is more, the cognitive overhead of creating unbounded textual maps has not been eliminated yet. Working with textual maps currently involves proactively transforming thoughts into readable symbols. Capturing and connecting ideas across a multitude of versions is energy-consuming. For all the talk about how neural interfaces may enable mind reading between people, the technology is still a long way off. Most recent mind decoders rely on muscular intents rather than imagined speech patterns to identify words, and the ones that do decipher brain waves have been trained on tiny ad hoc languages.

Even so, technology is advancing. While creating and editing textual maps by mentally expressing a thought is a faraway dream, our understanding of the motor cortex will soon bring about the ability to navigate textual maps with our minds—the engineering of a motor cortext. By unlocking knowledge, fostering collaboration, and encouraging ideation, textual maps will one day capture the “innumerable relationships” of ideas to infinitely expand humankind’s imagination.

Anthon P. Botha

The Future Of Text. A Mind-Time Journey

Text is our symbolism for encrypting the beauty in our minds for eternity to reach the soul of others. We use poetry and prose to paint the images of our imagination, contextualised in the morphology and syntax of our languages. Our understanding and sense-making is transferred in many forms to build a cloud of knowledge that we all plug into. With text, we endeavour to create an image for all to see, even for those who do not experience the object described. The synapses that fire in our brains ignite the digital switches of our media. We see without observing, smell without sniffing, hear without listening, taste without eating and feel without touching. Such is the power of text.

Mind-time travel takes us seamlessly from the past to the present and into the future and back. It is a product of our ability to do future thinking. Let us embark on such a journey to see how text and its quest for exact communication developed and may evolve.

The primate senses its environment, thinks contextually and uses mime to communicate an episode to fellow beings. Through evolution, linguistic capability and speech evolved over millennia, and knowledge was shared through the paths of narrative. Visual messages in rock paintings and carvings have lasted for thousands of years, but their interpretation is open and subjective. Then script evolved, and through writing readers are taken through the scenes of experiences into their own surrealistic imagination, making them feel they share in the event. There is still scope to adjust to one’s own tastes and preferences, giving faces to people who do not exist or are not known, and cultivating one’s own inner emotions while immersed in the story. The printing press brought massification of sharing this script and capturing it for generations to come. The typewriter and word processor enabled text-based media to proliferate. In film, text is visualised: the scriptwriter and director capture in text their mental image of the story they want and pass it on to the actors, who recreate the personae and scenes and embed them in the minds of the viewers.

How will we share knowledge in future? Will it be visual, like our ancestors did with rock-art? Will it be storytelling like traditional knowledge kept flowing through generations using only narratives? Will it be in explicit text, in a universal language that most people will understand? Will it be instantaneous, with a blink of an eye? The best way to transfer knowledge has still to be found, partly because knowledge is so evasive, so undefinable and so critical to our existence. Until now we have created and standardised alphabets. The letters are strung together like atoms in a molecule to form words and the words are linked like a macromolecule to form a substance. With the many combinations that are possible we can use a limited alphabet to describe what we experience and know. Yet, when we use words to tell a story, it is an image we see or remember that is described. Since there is no prospect of a common language that is ideal for knowledge transfer, meaning is ambiguous and influenced by culture, exposure and education.

Has our own advanced development made knowledge transfer more difficult? Before we could read or write as children, we could draw images of what we experienced and thought about. This is the instinctive representation of fact and emotion. Even though our drawing skills differ, it is easy to recognise symbolism. The language of the future may be visual, eliminating text as the carrier of meaning and knowledge. We currently see elements of this, like emoji, with which emotion is communicated across languages. Books are audibly presented to enable readers to become listeners again. Will the movie, even the immersive 3D movie, be replaced by virtual-world experiences? Will machine translation from one language to another be replaced by speech-to-image technology? Will text-to-image translation be possible without the intervention of humans as intermediaries?

Our paradigm of ‘knowledge capture through text’ will prevail for a long time, since text is so versatile, so precise in meaning, so beautiful in sculpting the visions of our minds, so powerful in its own symbolism. Text is the key that unlocks our imagination. Yet a present trend is to circumnavigate text. The Internet has connected humanity in a vast co-experience of the visual world. It is quicker to watch and listen to a short video clip than to read. Guide books and manuals are becoming visual through augmented reality. In a cyber-physical world, we attach data to things. Slowly but surely, other means of visualisation of knowledge will co-exist with text, and eventually may replace it totally. When will we see the last generation of readers of text?

Visuality in knowledge transfer will mean speech to image, text to image, thought to image, thought co-processing instantaneously sharing images in our minds... These images will be the models of our minds, the objects of simulation, the records of our conversations. They may be 3-dimensional, holographic constructs, objects we can label and recall, string together with a syntax of their own. They will have self-organising capabilities, morphing to truthfully represent events, and be universally accessible, integrated into virtual worlds that we can experience with our senses. They will have the unique property that meaning is captured in a way that all who experience it see the same thing. This will give us less misinterpretation, less subjective manipulation, more clarity and better definition. Where will that leave us as human beings? Are our psyches ready for it? With technovolution, the combination of biological and technological evolution, we have reached a point where we build such powerful tools and reduce the time of change so drastically that we may have to redefine what humanity is.

We do not have the technology to do this yet. However, if we dream it, we can do it. If we want it, we will get it. Will it be good for us? The answer lies in the same realm as asking whether artificial intelligence will be good for us. It is what we make of it that will count.

In this short journey of mind-time travel, we dreamt of big things to come, but I have used the only tool currently available to me – text. Text may never disappear; text may just be augmented by the new representations of knowledge. I could use text to catalyse your thoughts and inspire your own images of what the Future Of Text may be, and assist you to dream…

Azlen Elza

The Architecture Of Writing

What can writing learn from techniques of cinematography, or the artful meal preparation of a skilled chef? Could a musician composing music out of notes and rhythms be working through similar processes to a writer composing an essay or story out of words and paragraphs? As we delve deeper into what I call the architecture of writing, let yourself drift away from what you know about writing and open your eyes to new perspectives, new approaches, and new tools for thinking about, practicing, and growing your own craft of writing.

I like to think of writing as a sort of alchemy, a transmutation of ideas into words; to write well is to be skilled at translating your thoughts onto paper. Every discipline is like this: writers translate thoughts onto the page, painters onto a canvas, and architects onto the built environment. Although each has its own tools, methods and materials with which to express or sculpt a vision, the core process is similar across every discipline; at heart we are all simply communicating in different forms and through different mediums.

There is so much to be learned at the blurry boundaries between disciplines. If we are all simply communicating with different tools, these tools can be treated as analogies when brought into different contexts. Take painting, for example: what is the writing equivalent of quick brushstrokes on a canvas? Or of waiting for wet paint to dry? Or how can you write with colour?

These questions are like constraints, but not typical constraints. Oulipo, a group of French writers founded in 1960, used a number of constraints in their work, ranging from replacing each noun with the seventh noun after it in the dictionary to avoiding the letter “e”. These constraints are methods, processes for how you go about writing or manipulating words, that can spur creativity. However, asking a simple question like "how do you write with colour?" goes further: it becomes a constraint for creating new constraints, a path to discovering new ways of thinking about writing—I will elaborate more on this in a moment.
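Oulipo's noun-replacement constraint, known as N+7, is mechanical enough to sketch in code. The toy Python below uses an invented sixteen-word "dictionary" purely for illustration (a real run would load a full noun lexicon), and matches only exact lowercase words:

```python
# N+7: replace each noun with the seventh noun after it in the dictionary,
# wrapping around at the end. The word list here is invented for illustration.
NOUNS = sorted([
    "apple", "book", "cat", "dog", "garden", "house", "idea", "letter",
    "moon", "night", "ocean", "page", "river", "story", "tree", "word",
])

def n_plus_7(text):
    out = []
    for word in text.split():
        if word in NOUNS:
            i = NOUNS.index(word)
            word = NOUNS[(i + 7) % len(NOUNS)]  # seven entries forward, Oulipo-style
        out.append(word)
    return " ".join(out)

print(n_plus_7("the cat sat in the garden"))  # -> "the night sat in the page"
```

The constraint is deterministic, yet the output reads like a surrealist rewrite, which is exactly why Oulipo found such procedures generative.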

At face value, using the analogy of colour as a constraint for writing leads me to experiment with different sound and word associations: desire and wrath may have a feeling of redness, luxury of purple, and void of black. To experiment further with this idea I ran a short workshop in which I first asked participants to write an introduction of themselves, then had each choose a different colour and brainstorm words that feel yellow, green, blue or purple. I then mixed the two exercises, asking them to rewrite their introductions in the colour they chose. As an example, here was my original introduction:

Hello my name is Azlen, I am a programmer and designer interested in the future of interaction, education and writing. In this course I am interested in exploring together what writing can learn from other disciplines!

And then rewritten in the colour “red”:

Beware Azlen, squasher of bugs and manipulator of metaphors. The fire in my chest pulls me to forge new forms of interaction and ponder how we learn and write. At the heart of this course burns the question: what can writing learn from other disciplines?

This is just a simple example; you and I might have very different approaches and styles to the constraint of writing in colour. Reading through each colour-introduction from the workshop, every participant’s piece was worthy of example. This one, for the colour “black”, was fantastically playful and poetic:

Marking hyperlinks in ink, I link things I'm thinking, sinking, shrouding sharp darkness with artistic marks. Writing at midnight, birthing from a dearth of earth the husks of dusk, reading fleeting blanks and pages deep, turning rage asleep, learning burns, time unwinds and blinds, I find a nebulous web of densest edges, unhook books and erase ageless spaces as mages weep.

I mentioned that this question, “how do you write with colour?”, goes beyond a constraint on process—it is an entire creative framework. Once you figure out how to write colour into your words and paragraphs, you can begin to consider the wider discipline of colour theory. What about lightness or saturation? How can writing in these different colours convey emotions? Or, perhaps, can you design a colour scheme for an essay?

By bringing in analogies from other disciplines like this, you can apply your existing knowledge and skills to writing. It gives you a framework to experiment and ask new questions.

Maybe the Future Of Text—of writing—is to unlock the possibilities which already exist and explore what can be learned from other disciplines, each of us discovering our own unique styles and techniques. Let us create new forms, new ideas, new techniques, new tools for constructing palaces out of words. Let us be free to play and experiment with the infinite landscape of writing.

Barbara Beeton

The Future Of Technical Text

Text, in the sense considered here, is not limited to words, but also comprehends symbolic communication that can be represented on a visual surface, e.g. mathematical and technical notation. Until the late 1900s, communicating such material in digital form was all but impossible. A few computer programs existed to convert highly coded input to visual output that, when printed, had the form familiar to readers of traditional technical books and journals. The input, however, was all but unintelligible to anyone not trained in the particular coding scheme.

This changed starting in the late 1970s, when computer scientist Donald Knuth developed a system that was not only capable of producing print output of a quality equal to that of the most carefully composed technical books, but was also expressed in the jargon familiar to mathematicians and other practitioners of technical disciplines. This system is called TeX.

Originally intended for preparation of Knuth’s own writings, to be used by only him and his secretary, TeX caught the eye of other computer scientists, who obtained copies of the software and took it with them to their own institutions, where it was noticed by even more potential users. When it became obvious that TeX was not going to be limited to just his own use, Knuth reformulated the code in a computer language (Pascal) more portable than the one originally used (SAIL), explained its workings using a technique he termed “literate programming”, published the entire system in a five-volume series of books, Computers & Typesetting, and placed the result in the public domain, making only two requests: that the name “TeX” be reserved for the original program (or copies that passed a defined test), and that only he could make changes.

Only one requested change has been adopted by Knuth: TeX’s use for languages other than English was simplified by an extension about ten years after the initial release.

Forty-plus years later, a quite extensive community has grown up around the TeX kernel, with reportedly millions of current users. The original TeX had a memory limit of one megabyte. As hardware became faster and memory less restricted, and with the availability of Unicode and new font and printer technologies, the software capabilities expanded as well, with these new features implemented in TeX’s “offspring”:

• ε-TeX increased available memory and added useful debugging and reporting features;

• pTeX adapted TeX for typesetting Japanese; in turn, upTeX upgraded this for use with Unicode;

• pdfTeX produced pdf output directly, and added typographical niceties such as hanging punctuation;

• XeTeX made it possible, and easy, to use system fonts instead of fonts created especially for TeX use, and provided the ability to directly input Unicode-encoded text;

• LuaTeX incorporated Lua, a lightweight, high-level programming language designed primarily for embedded use in applications, for the purpose of adapting the native TeX algorithms for easier handling of Arabic and other languages with nonlinear composition rules;

• a HarfBuzz library was integrated with XeTeX and LuaTeX to make available advanced font-shaping capabilities;

• other extensions under development . . . .

As possibilities expanded with the typesetting engine, the limitations of browsers and other office tools became more and more obvious, and mathematicians were eager to be able to communicate electronically without first typesetting their thoughts. But no existing code provided the wealth of symbols necessary for such communication. A group of math and science publishers formed the STIX Fonts project, and presented a proposal to Unicode that resulted in the addition of codes for several thousand new symbols. One significant feature of this addition was the recognition by Unicode that mathematical notation is a distinct language, e.g., a script H representing the Hamiltonian is not the same as an italic H representing an ordinary variable, and a distinct code is required for each in order to enable text searches as well as accurate communication. Moreover, it was recognized that scientific notation is open-ended, so future requests for additions need be accompanied only by evidence of use in published material from a recognized publisher.

That took care of linear math expressions. However, much of math is not linear but two-dimensional, and, according to Unicode, that requires markup for its proper presentation. The TeX representation is fully linearized, in essentially the same way as a mathematician on one end of a telephone line would speak to a colleague on the other end. The LaTeX representation is also linear, although different from basic TeX in several important ways. (The LaTeX conventions are easier to program, but not as natural for a pure mathematician.) But the same math expression can often be input in more than one way in either form; and almost no software other than a TeX engine can natively “understand” all possible variations. Enter the *ML proponents. MathML was created to provide a structure compatible with browser conventions. Unfortunately it’s even less readable by a human than either TeX form, while suffering from the same “multiple possibilities” problem. So comprehensive searches are difficult, and likely to be incomplete. Even though a large body of scientific literature is now archived electronically in these formats, accurate searching on math expressions remains an open problem.
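The "multiple possibilities" problem is easy to demonstrate: the same fraction a/b has several common encodings, and a literal text search matches only one of them. In this minimal Python sketch the markup strings are standard plain TeX, LaTeX, and presentation MathML, while the search logic is purely illustrative:

```python
# Three equivalent encodings of the fraction a/b.
FRACTION_FORMS = {
    "plain TeX": r"{a \over b}",
    "LaTeX": r"\frac{a}{b}",
    "MathML": "<mfrac><mi>a</mi><mi>b</mi></mfrac>",
}

query = r"\frac{a}{b}"  # a literal search for the LaTeX spelling...
hits = [name for name, form in FRACTION_FORMS.items() if query in form]
print(hits)  # ...matches only one of three equivalent encodings: ['LaTeX']
```

A comprehensive math search engine would have to normalise all such variants into one canonical form before indexing, which is precisely the open problem described above.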

In addition to the problems of inconsistent representation, the future of technical text may depend on the ability to conquer bit rot. Computer files have nowhere near the permanence of clay tablets, which, after all, only become more indestructible when subjected to fire.

Belinda Barnet

There Must Be A Different Way To Pay

At present our most important stories are splintering, dispersed across thousands of tiny screens and multiple mediums, competing for attention on the tiny bright pulpits we are allotted by each platform. Some of us have more than one pulpit; Twitter for well-turned words, Insta for feeling inadequate, LinkedIn for looking more employed than we really are.

Algorithms written by companies whose revenues exceed the GDP of most nations get between us and the pieces of our stories, deciding what we can see and when we can see it based on a smog of data. Our data. The data they have extracted about us when we thought we were just connecting with friends. These algorithms are invisible.

The result is not just that our stories are atomised, vacuum packed into 280 characters for a 5” screen, but that they are told between product placements. Products that are carefully selected for us based on more personal detail than any other human being knows.

How can we get out of this strange mess? I’ve come to like the little pulpits, particularly Twitter, but not the knowledge that every click and every morsel we read is commoditised, fed into an advertising machine. I’d like to think there is an alternative way of paying for our platforms - an alternative to paying with our own data. Open-source the platforms for those stories and set up a nano version of micropayments.

This is a problem I hope the next generation solves.

Ben Shneiderman

Text Goes Visual, Interactive And Social: The Dynamism Of The Screen

Text Goes Visual

From cave scrawling to screen scrolling, authors have sought to convey their ideas and engage readers with images and text. The right blend of images and text was designed to capture attention, convey information, and change readers’ minds. At times images and text were separated, but there were artists and authors who persisted in finding the right blend of pictures and words.

Sometimes the drawings, maps, or photos provided the meaningful content, while the words offered instructive commentary. Other times well-crafted arguments, reports, or stories delivered the author’s message, while the pictures gave overviews or illustrated key ideas.

Gutenberg’s moveable type printing press expanded possibilities for text during the 15th century, dramatically expanding the production, while lowering the price, so as to create vast markets. These opportunities led to blossoming creativity by new authors who daringly wrote new books to explain the world to growing audiences who became literate because reading was such a powerful skill.

Photography, invented in 1839 by Daguerre and Fox Talbot, radically changed the balance of pictures and text. Travelogues brought distant landmarks and landscapes to wider audiences, who now knew what the sphinx looked like and how the Amazon forests were filled with dazzling plants and trees.

The balance seemed to tip in favor of photos. French painter Paul Delaroche declared that “from today, painting is dead”, but he failed to account for the remarkable creative potential of artists who found newer non-realistic landscapes, portraits, and ever more abstract forms. Photographers found vast new markets and artists opened up new creative possibilities.

Improved publishing methods lowered costs, dramatically expanding the audiences for fact and fiction. Charles Dickens became an internationally famous 19th-century author, later eclipsed by J.K. Rowling, whose Harry Potter books continue to engage readers globally. During the 20th century, the triumph of comic books and graphic novels opened up new possibilities for images and text, pushed further by compelling movies and cartoons that reached even larger audiences.

Text Goes Interactive

The World Wide Web again transformed what was possible, dramatically expanding audiences, while offering new creative possibilities. Web pages smoothly integrated text with multiple layers of graphics, photos, animations, sound, and videos, while giving readers fresh options of hovering, clicking, and dragging.

For a while, hypertext visions suggested a world where readers took some control away from authors. However, linear storytelling has remained dominant, but it is enriched by scrollytelling in which users swish back and forth smoothly through a story. Compelling animations, such as background layers cruising by while figures and photos drift in to amplify the text, are more engaging for many readers than traditional page turning.

Sound and spoken language are well-supported by interactive technologies, adding richness to images and text, while enabling access by users with disabilities. The startling variety of user-composed videos combines sound and images, sometimes even text, to support storytelling and factual presentations.

Virtual, augmented, and mixed reality methods will open up new possibilities for storytelling and information discovery. The largely visual methods might gain greater acceptance if designers found ways to accommodate text more effectively.

Text Goes Social

Poet Dylan Thomas’s tiny writer’s shed reminds us of the lonely experience that was the reality for many writers. Getting published took years, sometimes bringing visibility and maybe some thoughtful reviews, but often authors languished in obscurity. Now writers can email draft chapters to trusted colleagues or post them for open discussions. Tweets about completed works can trigger chain reactions in hours, while reviewers post public comments with spirited, sometimes hostile, remarks. Images and text are more open and more social than ever. In the extreme case, shared-document technologies enable groups to collaboratively author business reports, magazine articles, and illustrated books. On a grander scale, Wikipedia represents a remarkable collaborative project that is continuously updated, spawning versions in almost 300 languages.

Future Directions

Certainly there is a dark side to visual, interactive, social, and open access technologies. We believed that they would enable marginalized populations to have a voice, but we did not account sufficiently for the violent, hate-filled, fraudulent, criminal, or terrorist productions that are likely to go viral.

Openness is largely a virtue, but platform designers and application developers will have to find balanced approaches to limit the malicious actors who produce and promote these pernicious productions.

The clear message of this brief review is that more people are more creative more often than ever in history. The potential for huge audiences and directed messages to family, friends, neighbors, and colleagues continues to promote many kinds of literacy. These opportunities also encourage “authoracy” – the ability to organize and present ideas using images, text, sound, animation, video, and new media.

The increasingly powerful tools for creating and disseminating ideas, while stimulating discussion, have the potential to increase individual self-efficacy, strengthen community building, and support societal goals, maybe best exemplified by the 17 UN Sustainable Development Goals. And maybe it becomes reasonable to ask how better communication can respond to the climate crisis?

Bernard Vatant

Quantum Mechanics Applied To Text

One century ago, Quantum Mechanics (hereafter QM) radically changed our view of the world, questioning the very nature and even existence of what we nevertheless kept casually calling physical reality. After the QM revolution, we can no longer consider this reality independently of the process through which we observe it, every observation being an interaction. A similar conceptual disruption is currently happening in the way we consider text, shifting our focus from the static objects we used to call documents to dynamic interactions with an ever-changing, interconnected textual reality. Although we interact with it on a daily basis, this new text reality looks so elusive that we tend to call it virtual, a word and concept in fact introduced into physics by QM vocabulary1.

Before, between, and beyond interactions with writers and readers, a text can be viewed as (to use QM parlance again) a superposition of possible states2. This has always been true, quite obviously, of the text just before it is written. The text in limbo is some fuzzy state of the author’s mind, with all sorts of potential expressions. At the very moment of text production (supposing this moment can be pinpointed in time), all those potentialities collapse to a single written text. Most of the time today, this happens through a keyboard, one character at a time, or more and more often through a voice recognition system. Tomorrow we can expect smarter and smarter brain-computer interfaces allowing direct translation of brain activity into machine language. But, whether the process uses ink and pen, chalk and blackboard, wood stick and clay tablet, finger in the sand, or brand new brain-computer interfaces, what happens at writing time is akin to a wave function collapse.

At reading time, another reduction happens, not so different from the one described above at writing time. Among all possible interpretations of the written text, each reader chooses one, depending on as many parameters as were present at production time. There again, one could say this had always been the case, even with static documents. But something quite new and conceptually disruptive emerges in the digital world, singularly through the Web protocols. The text’s interpretation is now prepared before reading, based on parameters used in content negotiation. The text pushed through the user interface is more often built on the fly than stored somewhere as a static object. The many parameters involved in this process, known or unknown to the reader, make the text that is eventually read a realization of one among a growing number of possible states.
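The content-negotiation step can be sketched as follows. The texts and the selection rule are hypothetical and deliberately simplified (real HTTP negotiation also weighs quality values and media types), but the mechanism mirrors how a server realizes one of several possible states of a resource per request:

```python
# The "document" is not one stored object but a set of possible states,
# one of which is realized for each reader's request.
variants = {
    "en": "The cat sat on the mat.",
    "fr": "Le chat était assis sur le tapis.",
}

def negotiate(accept_language, variants, default="en"):
    """Pick the first requested language we can serve (toy rule, no q-values)."""
    for lang in [tag.strip() for tag in accept_language.split(",")]:
        if lang in variants:
            return variants[lang]
    return variants[default]

# Two readers asking for the "same" page receive different realizations,
# depending on parameters they may not even know they are sending.
assert negotiate("fr, en", variants) == variants["fr"]
assert negotiate("de", variants) == variants["en"]
```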

Quantum Mechanics led to such a severe conceptual disruption that it was difficult to accept, even for its very inventors, and is still today subject to various weird and contradictory interpretations. Let’s bet it will be the same for what happens today with text. The death of the document as a static page has severe consequences, not the least being a radical questioning of the nature of authorship. If the text comes to reality at reading time, the writer-reader distinction becomes moot. A text realization becomes the result of a process in which both writer and reader are agents, but they are not the only ones, the intelligence built into the text production and distribution system playing a growing part. In a theory of text inspired by QM, the writer, the system, and the reader would be entangled, no longer considered as separate objects3.


Bob Frankston

Text Is The Future

In my January 2019 column (“From Hi-Fi to CLI”) I wrote about the return of the command line. The command line never really left, but for years the focus has been on the Graphical User Interface (GUI) because it made it easy for people to interact with computers and applications.

The limitation of the GUI is that it stops at the screen. If you want to look beyond the screen you need a language for the concepts and mechanisms. Pictures and pictographs go only so far. We use words to capture the rich concepts and represent them as text.

The real power of words is that they are the means by which we organize our understanding of the world. The term “solar system” captures the idea of a heliocentric system of planets. It’s not very different from the words we used to describe the constellations in the sky that gave us the ability to navigate by looking at the stars. The constellations could then act as guideposts.

Some languages may use pictures, but they are treated as abstract symbols capturing the sounds of spoken language. While today’s Western languages use a small number of symbols, even languages like Chinese are symbolic rather than pictographic.

Given this, we shouldn’t be surprised that programming languages are text-based. We use them to communicate with computers in much the same way as we communicate with other people – exchanging symbols that represent concepts while retaining essential ambiguity. We can use a word like “horse” without specifying what kind of horse. While it leaves room for misinterpretation, any effort to be more specific could itself distort the meaning.

The practice of explaining concepts to computers, AKA programming, has the same characteristics. Software has given us (or perhaps is) a language for talking about language. The terms we use are by necessity anthropomorphic because the way we organize and leverage concepts is similar.

While there are significant differences in the way a brain and a CPU interpret symbols, we gain insight from the similarities. The big difference, perhaps, lies in what “everyone knows” and what “goes without saying”. When we’re working with computers, we have to be more explicit about these assumptions, and we can bring this explicitness back into how we communicate with people. Programming techniques such as procedural abstraction are similar to structuring recipes into named components.
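The recipe analogy can be sketched in a few lines of code; the procedure names and quantities below are invented for illustration. Each named step hides its details behind a word, just as “preheat” does in a recipe, and the top-level procedure reads like the recipe’s summary:

```python
# Procedural abstraction as recipe structure: each function is a named
# component whose internal detail "goes without saying" at the top level.

def preheat(oven_temp_c):
    return f"oven at {oven_temp_c}C"

def mix(ingredients):
    return "batter(" + " + ".join(ingredients) + ")"

def bake(batter, oven, minutes):
    return f"{batter} baked in {oven} for {minutes} min"

def make_cake():
    # The top-level procedure is the summary line; the "everyone knows"
    # details live inside the named sub-procedures.
    oven = preheat(180)
    batter = mix(["flour", "eggs", "sugar"])
    return bake(batter, oven, 30)

result = make_cake()
assert result == "batter(flour + eggs + sugar) baked in oven at 180C for 30 min"
```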

As we increasingly communicate online through text, we’ve developed a vocabulary for the purpose. While it is common to add emojis to bring affect into the conversation, they are more like seasoning for it.

What may be new is explicitly adding smarts to the text similar to the way URLs give the ability to use words as references. But they don’t replace the need to take responsibility for the narrative text.

Programming has used text as the representation because it preserves and gives us a path to the abstract concepts through which we see the world. Perhaps, in the future, our understanding of software will give us a way to give more life to text as we share the world with our automatons. Street signs will be written to be read by people and by cars with an awareness of the different needs of each.

Text won’t go away but will expand as we share the world with our creations.


Bob Horn

Explanation, Reporting, Argumentation, Visual-Verbal And The Future Of Text

Structured Writing


In this era of overwhelming amounts of complex information, we must be able to easily scan and skip what we already know. We must be able to see immediately the patterns and structures of patterns contained in the communication. Writers must be able to write documents, some or perhaps most of which will not be read but can be accessed easily. There are various ways of doing this. I present three.

Partial solution: structured writing (aka Information Mapping®)

Structured writing is an integrated synthesis of tools and techniques for the analysis of complex subject matters (primarily explanation and reporting) and a group of standards and techniques for the management of large amounts of rapidly changing information. It includes procedures for planning, organizing, sequencing, and presenting communications.

For stable subject matters, you can divide all the relevant sentences into 40 categories. Examples are: Analogy, Definition, Description, Diagram, Example, Non-example, Fact, Comment, Notation, Objectives, Principle, Purpose, Rule, etc.

Some of the sentences stand by themselves in these categories (e.g. Definitions, Examples). Others make sense as part of larger structures (e.g. Parts-Function Table).

Key features

Always write in small chunks (also called blocks). Unlike many paragraphs, a chunk covers only one topic. Nothing extra. Chunks can contain several sentences. Label every chunk with an informative, relevant, short, boldface title. There are standards for organizing and sequencing large documents. Diagrams and illustrations can be chunks. Use them.

It is possible to cluster most of the 40 sentence types into seven categories: procedure, process, structure, concept, fact, classification, principle. Another 160 chunk types are available for Report Documents and Scientific and Technical Reports.

About 400,000 technical and business writers world-wide have been taught structured writing since 1969.

Hierarchy and other sequencing

For larger structures, you can use outlining or your favorite method of sequencing thought. Don’t hide the outline from the reader. Boldface the document’s structural subheads. Integrate your chunks into it.


• Precision modularity.

• Ease of scanning and skipping of irrelevant text.

• Ability to determine whether a subject is completely covered.

• Improves efficiency of analytic and learning processes.

• Has been applied to other discourse domains: e.g. science abstracts and reports and argument mapping.

• Greater ability to specify rule domains of components and rules for writing clearly.

• Improved decision making. Fewer errors. Sometimes improved creativity.

Visual Language

Visual-verbal integration

Text tightly integrated with visual elements (shapes and images) is rapidly proliferating. I call this visual language, and it specifically addresses the functionality question: “When working tightly together, what do words do best and what do visual elements do best?”

Diagrams are prototypical examples. The lines, arrows, and shapes represent relationships. The words represent either processes or objects. Tightly and properly integrated, they provide the best representations for mental models and for many kinds of decision making. We can no longer pretend that text exists outside of any context, particularly visual-verbal context.

Images provide concreteness that words do not. Shapes provide structure, organization, connections, movement, and quantitative and qualitative relationships that are difficult to replicate in a purely textual document.


Various scholars have built theoretical systems of the basic morphological primitives of visual language. They have also described visual-verbal syntax, levels, and topologies.

Semantic elements

The semantics of visual language combines elements from visual metaphors, diagrams, cartooning, and engineering drawing, and incorporates space and composition as well as perception and time. The semantics describes unique forms of disambiguation, labeling, chunking, and clustering, and distinct rhetorical devices. It is possible to distinguish between mere juxtaposition and visual-verbal integration.

Functional semantics

Functional semantics provides the ability to:

It also permits:

Thus visual-verbal language transcends the constraining effects of only using the alphabet.

The Advantages

Overall, this tight integration of visual-verbal elements enables:


For example, diagramming can provide the ability to quickly identify relationships that are difficult, if not almost impossible, to express in text by itself. Here is a simplified diagram.

KEY. Arrows = causes. Lines = influences. Letters = Events

Your task: Write out in text all of the relationships shown in this diagram.


Does your list identify all of the relationships?

Does it identify distance between events?

Does it identify all of the ambiguities of the influence lines?

How do you express the difference in distance between A-D and L-J?

How do you express the wiggle in arrow G-K?

Argumentation Mapping

Disputed subject matter

In disagreements, much can be clarified with various forms of argumentation mapping, a form of diagramming. It uses six categories for placing your sentences in the diagrams: Grounds, Claims, Warrants, Backing, Qualifiers, Rebuttals.

This diagramming enables a reader to easily follow pros and cons of disputed discourse. When linked in hypertext, it enables many people to contribute to the debates and to access original sources of the evidence and data supporting the claims and counterclaims. It provides a structure for disagreeing that enables better evaluation and judgment.
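As a sketch, such a map can be held in a small data structure, one slot per sentence category, ready for hypertext linking; the claim, evidence, and rendering function here are invented illustrations, not Toulmin’s own notation:

```python
# A single argument-map node using the six categories:
# Grounds, Claim, Warrant, Backing, Qualifier, Rebuttals.
argument = {
    "claim": "The library should extend its opening hours.",
    "grounds": ["Evening visits rose 40% last year."],         # evidence offered
    "warrant": "Services should follow demonstrated demand.",  # why grounds support claim
    "backing": "Library charter, section 2: serve community needs.",
    "qualifier": "probably",                                   # strength of the claim
    "rebuttals": ["Unless staffing costs prove prohibitive."], # conditions of exception
}

def render(arg):
    """Flatten the map into readable pro/con lines, as a hypertext node might."""
    lines = [f"CLAIM ({arg['qualifier']}): {arg['claim']}"]
    lines += [f"  GROUNDS: {g}" for g in arg["grounds"]]
    lines.append(f"  WARRANT: {arg['warrant']}")
    lines += [f"  REBUTTAL: {r}" for r in arg["rebuttals"]]
    return "\n".join(lines)

print(render(argument))
```

Linking each slot to its source document would give readers the direct access to evidence described above.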


Information murals contain tight integration of text and visual elements in wall-size communications. Here’s one with a couple of hundred text chunks and a similar number of visual elements.


The (Near?) Future

Prediction: Humanity will continue to build its systems of thinking with structured writing and visual language, argumentation mapping and information murals, to clarify and improve communication.

I look forward to the invention of new words, new diagrams, an atlas of normalized diagrams, and visual-verbal icons. These will give us a further ability to express ourselves more precisely and quickly, while portraying the necessary contexts for communication.


For details of these ideas, my books and speeches:

Bob Stein

A Unified Field Theory Of Publishing In The Networked Era

These are excerpts from three pieces I wrote from 2005 to 2013. Looking backwards I think the basic premise — that the notion of text will expand to include multimedia forms of expression within a vibrant social network — is essentially correct. What I vastly underestimated was the resistance of legacy publishers to strike out in new directions and the corresponding difficulty of new makers to gain a foothold.


Although we date the “age of print” from 1454, more than two hundred years passed before the “novel” emerged as a recognizable form. Newspapers and magazines took even longer to arrive on the scene. Just as Gutenberg and his fellow printers started by reproducing illustrated manuscripts, contemporary publishers have been moving their printed texts to electronic screens. This shift will bring valuable benefits (searchable text, personal portable libraries, access via internet download, etc.), but this phase in the history of publishing will be transitional. Over time new media technologies will give rise to new forms of expression yet to be invented that will come to dominate the media landscape in decades and centuries to come.

My sense is that this time around it’s not going to take humanity two hundred years to come up with the equivalent of the novel, i.e. a dominant new form. Not only do digital hardware and software combine into an endlessly flexible shapeshifter, but now we have gaming culture which, unlike publishing, has no legacy product or thinking to hold it back. Multimedia is already its language, and game-makers are making brilliant advances in the building of thriving, million-player communities. As conventional publishers prayerfully port their print to tablets, game-makers will jump on the immense promise of these shiny, intimate, networked devices.

a unified field theory of publishing


I’ve been exploring the potential of “new media” for nearly thirty years. There was an important aha moment early on when I was trying to understand the essential nature of books as a medium. The breakthrough came when I stopped thinking about the physical form or content of books and focused instead on how they are used. At that time print was unique compared to other media, in terms of giving its users complete control of the sequence and pace at which they accessed the contents. The ability to re-read a paragraph until it’s understood, to flip back and forth almost instantly between passages, to stop and write in the margins, or just think - this affordance of reflection (in a relatively inexpensive portable package) was the key to understanding why books have been such a powerful vehicle for moving ideas across space and time. I started calling books user-driven media - in contrast to movies, radio, and television, which at the time were producer-driven. Once microprocessors were integrated into audio and video devices, I reasoned, this distinction would disappear. However – and this is crucial – back in 1981 I also reasoned that permanence was another important defining aspect of a book. The book of the future would be just like the book of the past, except that it might contain audio and video on its frozen “pages.” This was the videodisc/CDROM era of electronic publishing.

The emergence of the web turned this vision of the book of the future as a solid, albeit multimedia, object completely upside down and inside out. Multimedia is engaging, especially in a format that encourages reflection, but locating discourse inside of a dynamic network promises even more profound changes. Reading and writing have always been social activities, but that fact tends to be obscured by the medium of print. We grew up with images of the solitary reader curled up in a chair or under a tree and the writer alone in his garret. The most important thing my colleagues and I have learned during our experiments with networked books over the past few years is that as discourse moves off the page onto the network, the social aspects are revealed in sometimes startling clarity. These exchanges move from background to foreground, a transition that has dramatic implications.


In the print era, the author committed to engaging with a particular subject on behalf of future readers. In the era of the network, that shifts to a commitment to engage with readers in the context of a particular subject.


The editor of the future is increasingly a producer, a role that includes signing up projects and overseeing all elements of production and distribution, and that will of necessity include building and nurturing communities of various demographics, size, and shape. Successful publishers will build brands around curatorial and community building know-how AND be really good at designing and developing the robust technical infrastructures that underlie a complex range of user experiences.


Books can have momentum, not in the current sense of position on a best-seller or Amazon list, but rather in the size and activity-level of their communities.

The Future of the Book is the Future of Society

“The medium, or process, of our time — electric technology — is reshaping and restructuring patterns of social interdependence and every aspect of our personal life. It is forcing us to reconsider and re-evaluate practically every thought, every action, and every institution formerly taken for granted. Everything is changing: you, your family, your education, your neighborhood, your job, your government, your relation to ‘the others’. And they’re changing dramatically.”

Marshall McLuhan, The Medium is the Massage

Following McLuhan and his mentor Harold Innis, a persuasive case can be made that print played the key role in the rise of the nation state and capitalism, and also in the development of our notions of privacy and the primary focus on the individual over the collective. Social reading experiments and massive multi-player games are baby steps in the shift to a networked culture. Over the course of the next two or three centuries new modes of communication will usher in new ways of organizing society, completely changing our understanding of what it means to be human.

Catherine C. Marshall

A Meditation On Ephemeral Text

Most text—especially hypertext—is ephemeral. We write, and the text drifts away, carried downstream by the current of time.

More than that, most text is meant to be ephemeral, of the moment, of the mood, of the task: a shopping list, a comment, marginalia, a draft. Even literary text, written with the indelibility of transcendent art, gradually fades. Charles Dickens. Oscar Wilde. Virginia Woolf: of course some writers’ works survive, but if you spend much time browsing Google Books, you’ll see so much unfamiliar literature, art and everyday writing outside its contemporaneous context. Hypertext itself has always been as much about writing as it is about reading, which is in itself a recipe for ephemera.

Over time, then, text becomes data, perhaps to be mined, perhaps to be indexed and searched, perhaps to be displayed as snippets, but seldom to be read in linear text’s fixations and saccades.

Our biggest hypertexts—the Web, the walled garden social media platforms—are ephemeral too. The Internet Archive harvests an important segment of the Web, but it’s safer to think of any particular grotto of text as protean, always changing, never anything you can return to with assurance. Social media, both public and private, is a textured landscape of mystery spots, bots, duplicate identities, and strange temporal sinkholes. Its users largely don’t want social media content to persist; content is a reflection of identity, and identity is in the present, ahistorical, performative, detailed, and momentary.

As if to underscore this point, study participants sometimes bring up how an “On This Day” function serves as a potent reminder to delete outdated posts. Recent events (e.g. the accidental loss of MySpace content; a shift in policy that would remove inactive Twitter accounts) remind us that there’s nothing to guarantee any of this will be around in a hundred years.

The digital world, with its ease of creation, immateriality, and the swiftness with which stuff accumulates, then, seems particularly vulnerable to loss. Ephemera in the physical world survives through a combination of benign neglect and unreflective nostalgia. Benign neglect in the digital world isn’t as forgiving. A box of photos, stored in a dark place (under a bed or in the attic), remains intact across generations. Email, on the other hand, has a peculiar volatility. Changing institutions or abandoning one service for another is often as effective as a fire. It’s not just a problem of shifting digital formats; it’s more complicated than that. You might download your email, but what are you downloading? Attachments? Spam? Ads? How about the indeterminate cc’d communications that don’t mean much to you or anyone else? You can’t compare literary email to a bundle of literary letters in a folder.

What would happen if we were able to snapshot the digital world, and keep it all (sealed and embargoed)? After all, books are anchored in a larger infrastructure: the means of production (a shifting network of publishing, curation, and ownership) and the social and physical world the books refer to (and that refers to the books). Most everyone agrees it would be too much.

In accumulation, it is hard to draw a line between what we want to keep, and what we want to forget.

I’ve tried to find that line for the last several decades as I’ve explored digital archiving (all while routinely losing my own stuff and not giving it a second thought). When I assert that something should be kept (the public-facing part of Facebook, say) I usually find a counterexample or a reason that I’m dead wrong. Yet in writing a biography, I’ve also experienced the vast power of ephemera to recall people, time, and places. It’s a thornier and less technical problem than we might think (although the technical portion remains formidable).

Here’s something that seems like ephemera: a paper door tag from an apartment in New York City that has been tucked between the pages of poet Allen Ginsberg’s handwritten journal. It’s part of a special collection at Stanford University. The door tag is sufficiently minor that it doesn’t warrant cataloging as a separate item in the collection’s finding aid. Ginsberg evidently grabbed it after the residents had lost the apartment: all three names on the door tag — KEROUAC, ADAMS, F. E. PARKER — have been crossed out in an apparent sequence.

If you know how to interpret it, the worn paper door tag tells a story that’s unfinished in the journal entries. Allen Ginsberg kept the door tag in his journal because he was on the outside of the group (and of the apartment), looking in, not because he was an insider. The insiders felt neither nostalgia nor fondness for this time and place, at least not until fame found them. None of them would have kept this scrap. They were freaked out by a murder (the killing of David Kammerer) that had occurred in their midst, and they scattered with an imperative to forget what had happened. But plucky Allen Ginsberg kept the door tag from 421 West 118th Street with no names left on it. He’d go on to tell a different story in later years, one that’s nostalgic, one that casts the apartment as a literary salon, a social nexus he was part of: a story that is belied by the bit of ephemera he has kept.

Because it is possible to save so much digital text, some would assert that we don’t need to make the same decisions libraries and archives have made about print material. Benign neglect (and more deliberate efforts) will leave us with more than enough, as they have in the past. But digital ephemera is simply more fragile (and more prone to corporate caprice) than the leavings of the print world. If a discarded door tag has the ability to evoke so much, imagine what the rich world of digital creation might leave us.

Charles Bernstein

Immmemmorabillity Alll Ôvęr Agon

“The real tęchnøl

ögy – behīnd all öf ou

r oth

er techñologies – is ł

anguage.” The alphabe

tic revœlution: a remarkably a

ccessįble ušer

interface and an enörmøus capacity to store

retrievableinfōrmation. Gr

eek alphabetic wr

iting provided a new and better means

ffœr th

e storage and retrieval öf cultural memory. T

he earliestGrêek writing was mårked b

y the e

mergence of a new alphabetic technoløgy with

in a culture in which oral technology re


dominant fœr a few hundred years. Much of the

new ålphabeticwriting, then, w

as an aid to m

emöry, taking the orm o scripts to be memo

rized ffor subsequent perfformænce. In such scri

pted writing, the page isnøt the ffinal destination

n but a p

reliminary štage, prompts fœr final prese

ntatiön elsewhere. Suchholdover writing pract

ices might be contrasted with more distinctively

textual features of writii

ng, ones that are less bou

nd to the “transcriptive” fun

ctions of wriitiñg. Wr

iting not ønly recordslanguage, it alsö changes l

anguage – and cons

ciousness. As we enter into æ

pœstliterate period, we c

an begin to šee the boök

as the solid middle groun

d between the stage (pe

rførmed pöetry) andthe screen (digital poetry). W

riting is a storage medium. It stores verbal langua

ge. But the værious technoløgies (hierœglyphs, s

cripts, printing, hypertext) literally score the lang

uage stored. Inotherwords: Writing records the m

emøry of language just aš it explores the possibili

ties for language. If p

oetry in analphabetic culture

maximize$ its st

örage function through memor

izable language (formulåic, stressed), then poetr

y in the ageof po$tliteracy (where cultural inf

ormation is stœred ørælly, ælphæbeticælly, and d

igitally), isperhæps mos

t fully realized th

rough refractory—unmemörizæble—la

nguage (unexpected, nonfformulæic, dis-stresšed).

It iscommonplace to say that photography

ffreed painting fr

om the burden of pictørial

reprešentation, asfor centurie$ paintingß and

drawings hadbeen the primary means o

f pictöriæl and image støring (mørphing

) and transmission. Alpha

betic writing u

ltimately freed poetry (though never co

mpletely) from thenecessity œf storæge a

nd transmissiön of the culture

’s memories and laws – poetry’s epic

function. In the ageof literacy, this tas

k was ultim

ately assumed by prose. Poetry,

released ffrom this overriding obligatiœn

to memory storæge, increašingly became


fined by the individual voice, poetry

’s lyric functiøn (the persistence of epi

c notwithstanding, since epic in the a

ge of lyric bec

ömes less inffræstructure

and more art). With t

he adventöf the p

hoto/phono electrønic, postliterate age

, the emerging ffuncti

on for pœtry is ne

ither the störage of collective memory n

œr the projection of individuæl voice

, but rather an explör

ation ofthe med

ium through which the størage and exp

re$$ive functions of læng

uage work. A t

extual poetry doesnöt create language t

hat is cômmittable to memory but rat

her a memory of the analp

habetic that

iš committable tø language. This is wh

y so much tëxtūalwriting sseeemmss to return t

ö a/literate features of længuage, not o

nly in other cultures but also in our öwn. T

extuality, sounded, evokes orælity.

Cœnversely, orality provokes textualit

y (polymörphously), albeit the virtual, a

literate ma

teriality of woven sēmiošis. T

his is orality’s anterior horizon, its aco

ustic and linguisticgröund, embodied an

d gestural. The stûf

fness œf language, its v

erbality, is present i

n both writingand sp

eech but it iš particūlarly marked when la

nguage is listened to, or read, withøut the

filter of its information function. Poetry

’s social function in öur time is tø bring

language ear t

o ear withits temporality

, physicality, dynamism: its evanescen

ce not its fixed charācter; itš fluidity, n

øt its authority; its structures not its s

torage capa

city; its concreteness and p

articularity, nöt itš abstract logicality a

nd čłârìtÿ.”

Source Text

Chris Gebhardt

Decentralizing Text In A Global Information Graph

It is impossible to predict the Future Of Text and how we will interact with it, but we need to build the right structures for it to evolve unhindered. We already know some of its characteristics. It is social, contextual, collaborative, multi-layered, multi-versioned, and multi-media. Said otherwise, it is non-linear. It is self-describing, portable, fluid, and independent of infrastructure and external authority. Said otherwise, it is decentralized. Yet current mainstream technologies poorly support these characteristics. Partial solutions consistently look like inelegant hacks. We are overdue for an architectural revolution that comprehensively addresses the underlying weaknesses.

How we connect text, and data in general, is the most foundational concern. The properties of references heavily influence the rest of an information architecture. To enable reliable versioning, annotation, and layering of information, references must be stable. To remove dependency on infrastructure and external authorities, reference identities must be intrinsic, not assigned. To allow for fluidity and portability of content, references must be independent of context and location. Cryptographic hashing is the only technique for producing references with these properties. To guarantee them, we must exclusively use hashes to address units of text and surrounding data artifacts. (With cryptographic signing for provenance.) Data naming schemes must be abandoned, because named data has the inverse of every desired property! While hash-only schemes present new challenges, solutions are now known. The time has come for a complete switch.
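The properties claimed here for hash-based references can be seen in a minimal sketch (not from the essay itself, and using Python’s standard hashlib only for illustration): the address is derived from the content alone, so it is intrinsic, stable, and identical wherever and by whomever it is computed.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Derive an intrinsic, location-independent reference from content alone."""
    return hashlib.sha256(data).hexdigest()

# Identical content yields an identical address, anywhere, with no naming authority.
a = content_address(b"The future of text is decentralized.")
b = content_address(b"The future of text is decentralized.")
assert a == b

# Any change to the content yields a different address: references cannot silently drift.
c = content_address(b"The future of text is centralized.")
assert a != c
```

Note the contrast with a named reference (a URI): the name stays the same while the content behind it can change, which is exactly the inversion of properties the essay objects to.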

Previous applications of cryptographic hashes to data models and network schemes have been relatively naive. The one-way property of hashes guarantees a directed, tree-like structure among connected nodes of data. In turn, most designs have been constrained to linear (single branch) or hierarchical (full tree) models. (Examples include distributed filesystems and version control tools like Git, blockchains, and countless derivatives.) Likewise, because hashes themselves are immutable references to specific data, parallel schemes have been devised for mutable references to data that may change over time. Yet there are zero technical advantages to mutable references like URIs. They are merely familiar to developers, while destroying the properties we desire. The alternative is to use only hashes but then discover, collect, and network knowledge of inbound references. Envision an append-only graph, where new data is discovered by its reference to existing data. Clients append to the graph, watch for others to do the same, and react accordingly. This history-preserving, non-hierarchical data model provides mutability without compromising reference stability. It also enables a feasible form of two-way links.
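The append-only graph described above can be sketched in a few lines (a simplified, hypothetical model, not a specification from the essay): nodes are immutable and addressed by hash, new nodes reference old ones, and an index of inbound references makes successors discoverable, yielding mutability-by-accretion and a feasible form of two-way links.

```python
import hashlib
import json

class Graph:
    """Append-only store: nodes are immutable and hash-addressed; an index of
    inbound references lets clients discover new data that points at old data."""

    def __init__(self):
        self.nodes = {}    # hash -> node content
        self.inbound = {}  # hash -> set of hashes that reference it

    def append(self, payload, refs=()):
        node = {"payload": payload, "refs": sorted(refs)}
        # Canonical serialization so the same node always hashes the same way.
        h = hashlib.sha256(json.dumps(node, sort_keys=True).encode()).hexdigest()
        self.nodes[h] = node
        for r in refs:
            self.inbound.setdefault(r, set()).add(h)
        return h

g = Graph()
v1 = g.append("first draft")
v2 = g.append("second draft", refs=[v1])  # new data references existing data
assert v2 in g.inbound[v1]                # v1's successors are discoverable: a two-way link
```

Nothing is ever overwritten: “updating” a document means appending a successor node, and readers following the inbound index see the full, history-preserving branch structure rather than a single mutable name.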

Cryptographic signing and hash-based referencing provide durable publication, something we lost in the transition from analog to digital publishing on the web. Similar to analog prints, signed hash-identified information can be widely disseminated and is resistant to removal, in proportion to its popularity. It cannot change or disappear or lose its identity if a host fails or is hacked. It is archival by default, eliminating the need for irregular backup and mirroring solutions. Creation is decoupled from dissemination. There is no distinction between a local and network copy. There are no names or links to remap. A document’s hash is the same wherever it exists. Its references to other documents are the same wherever they exist. This restores a capability we’ve never had in the digital age: reliable independent annotation. Unlike mutable URIs, hash references cannot break. And because the content is fixed, relative to the hash, no embedded anchors are needed. Third party annotators can use simple text or media coordinates, without worry that the referenced content could shift position later. Compare analog publications, where page and paragraph numbers can be cited reliably, relative to a publishing date. Annotations themselves are contained in hash-identified data nodes that hash-reference the original content. Using the previously described graph model, they can be discovered and aggregated amid networked systems. Propagation is driven by interest and author trust, individually and among communities. Without assigned identifiers like names or paths, no structural conflicts are possible. This enables another revolutionary capability: unlimited independent layering of information.
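The annotation scheme above can be made concrete with a small, hypothetical example (the field names are illustrative, not a format defined in the essay): because the target content is fixed relative to its hash, a plain character-offset coordinate suffices and can never drift.

```python
import hashlib

# The annotated document, identified only by its content hash.
doc = b"Call me Ishmael. Some years ago..."
doc_hash = hashlib.sha256(doc).hexdigest()

# A third-party annotation node: it hash-references the target and uses a
# simple text coordinate, safe because the referenced content cannot shift.
annotation = {
    "target": doc_hash,   # unbreakable reference to the exact content
    "span": [8, 15],      # character offsets of the annotated text
    "note": "Narrator introduces himself.",
}

assert doc[8:15] == b"Ishmael"  # the coordinate resolves, now and always
```

An embedded anchor is unnecessary here: the annotation lives in its own hash-identified node, so the original document is never modified and unlimited independent layers can accumulate without structural conflict.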

Users will control how multi-sourced information is filtered and layered into local views. Text annotations are more than margin notes. They can attach stylistic, grammatical, or semantic markup, social metadata, and translations. They can support concept-mapping, robust discourse, and multi-media composition. Layering builds context helpful to the reader. It can provide explanations, qualifications, fact checks, cross-references, and surrounding narrative. It can highlight levels of agreement and importance throughout a text, aggregating thousands of readers’ reaction annotations. With networked hash reference collection, every node of data is its own permanent nexus of interaction, encouraging further engagement by all parties. This contrasts with the controlled channels of the classic internet, which tend to be both ephemeral and either one-sided or shaped into algorithmic filter bubbles. Instead, continuous exposure to broad context inoculates against disinformation, deceptive advertisement, and harmful bias.

We will need new software designs for managing heterogeneous layered information. User interfaces must become dynamic environments of composited functionality, custom rendered for current tasks and modalities. Plain text is elegant. Yet current software architecture encapsulates text with all kinds of technology management artifacts. There are countless containers, from apps and services to myriad file formats. This entanglement ruins interoperability, re-use, and fluidity! We must decouple text and data from supporting storage and network systems. We must liberate it from standalone apps that silo it for commercial leverage. Rather than seeking to control it, computing environments should instead bring software alongside the information at hand.

Architecting the future of digital text must be a pursuit of permanence, not expedience. Clean-slate design is required, to prevent stale ideas from compromising the long-term foundations. Anything we bake in must be supported indefinitely. Modularity and extensibility are thus critical. The core standards must be as simple as possible, limited to universal aspects and designed to evolve. We should build bridges from existing internet systems, but the goal is refactoring, not continuous support of outdated protocols and data models. The World Wide Web has served us well but outgrown its architectural foundations. Let’s build a Global Information Graph.

Chris Messina

The Linguistic Inner Tube

Ever wonder: who was the first person to put the dollar sign in front of a number? Why did they do it, and when?

Moreover, what motivated the invention of currency symbols in the first place? What made it necessary to differentiate between generic counting numbers and monetary figures? Which currency symbols came first, and did all currencies race to find representation in their own unique typographic symbol? What went into the design process of these symbols? Were they on physical money first? Do currency symbols go extinct when their associated civilizations go bust?

Back to the dollar sign… why does it live in front of the number and not behind it like other currency symbols (e.g. the Euro (€) or even the lowly cent (¢))?

I’m fascinated by questions like this because I’m the guy that put the pound symbol (#) in front of a word or concatenated phrase to create the hashtag.

When I came up with the hashtag back in the summer of 2007, I was pretty clear about my audience: other techies and geeks like me who appreciated clear, efficient, and clever language hacks. Since we spent so much time talking to computers on their terms (that is, via the command line interface), we were always looking for ways to pack more meaning into fewer keypresses. What’s that they say about developers being lazy?

File storage was expensive. Sending and receiving data was expensive and slow. The cloud hadn’t been built out. We didn’t have fast and reliable mobile connectivity everywhere. Consequently, etiquette demanded compressing photos and music and zipping files to lower the cost of sending, receiving, and storing them. Finding an effective algorithm to shrink down data saved you and your intended recipients money ($$$)!

As we proved that the internet could be a platform not just for business applications but for self-expression, we sought increased convenience and access. Telephony was freed from the shackles of the landline and mobile communication technology became more affordable and consumer friendly. With this growing popularity, hackers and phreaks exploited obscure nooks and crannies in this digital substrate, unearthing a hidden message bus that could be used to send 160 ASCII characters between phones. What had been set aside as a technical feature for transmitting reception strength and information about incoming calls between telephony equipment would become known as SMS (short message service).

As professionals discovered that short text messages offered greater convenience and efficiency than the formalism and effort required by voice calls, use of this medium increased (consumer adoption was delayed by the 10¢ per message stamp).

Gradually text messaging was rolled into standard cellular phone plans and texting grew in popularity, especially among teens trying to avoid being caught passing notes in class. The ease, frequency, and adaptability of texting laid the behavioural foundation for productizing SMS. The first publishing service to capitalize on this trend sported a preposterously compressed name, wholly befitting the medium: twttr.

Initially critics scoffed at the incessant self-talk and chirpy chatter of this emergent communications backwater. Sure, it was well known that humans are an especially social species, but prior to Twitter it was hard to overestimate just how much we communicate amongst and with ourselves — all day, every day, from the moment we rise to the moment we doze off.

Not that our cacophony of tweets didn’t have precedent — but in form, not scale. The telegram gave people an effective text-only channel to send point-to-point, short-form messages. But Twitter’s innovation — scaffolded upon the success of the web — made heretofore private messages visible for all. That telegrams required you to specify your audience prior to transmission inhibited their use to broadcast information to recipients unknown. There was no way to set a telegram’s addressee field to “everyone”. Twitter had no such restriction.

The instantaneous dissemination and accessibility of tweets — for free — is novel to this epoch. Twitter dispensed with publisher-gatekeepers and replaced them with an all-seeing, self-optimizing algorithm. The price for open publishing, direct access, and decentralization was chaos and unpredictability. As new voices suffused the original Silicon Valley monoculture, Twitter’s social tributaries began to take shape. But in the frothy churn of Twitter monologues, each new tweet elbows its way past the previous in reverse chronological order. Although this was intentional, the pace of new content made it hard for newcomers to find their kin and for experienced Twitterati to engage in coherent discussion. Topical eddies and thematic tide pools briefly formed only to succumb to the relentless surf.

But earlier social networks had cleaved out dedicated areas for affinity to flourish by adopting formal architectures of participation like chatrooms, forums, or groups. Each space had its own self-anointed royals who set their own rules, developing their taste for order or chaos, or a mix of both. Strong borders and hierarchical roles kept the peace and promoted coherence. There were rooms for dog lovers and for cat lovers, or for Star Wars fans but not Trekkies. In this digital diorama of an adult summer camp, a stable tyranny of structure and predictability took hold, and many groups flourished by convention.

But in the slipstream of the Twitterverse, the imposition of process over flow caused those structures to buckle under the real-time appeal of Twitter, which drew people in with movement, energy, connection, and spontaneity. It was like a verbal dance party that demanded that you gyrate and sway with the crowd lest you get trampled under foot. As long as you bobbed and swayed in sync with the crowd, you had the chance — like anyone else — to groove to the center of the floor, bust out your signature move, and then swivel out, brimming with self-satisfaction at your clever contribution.

In the thrumming wave pool of tweets, swimmers clambered upon conversational inner tubes to find temporary reprieve. These floats were necessarily free for all, owned by none but usable by all — public barges on which to gather and shout and then sling from, on to the next.

That’s what gave Twitter its playful energy, right? The splashing throng and the free-for-all?

Metaphorically speaking, hashtags are linguistic inner tubes. Unlike conventional social networking architectures, the hashtag’s signifier is a typographic barnacle idempotently anchored to the underbelly of a lexical whale.

A linguistic stowaway, the hash prefix (#) is often verbally discarded, unlike the dollar sign. We read “$20” as “twenty dollars”, but we don’t necessarily read #blacklivesmatter or #metoo as hashtag black lives matter or hashtag me too.

When the dollar sign precedes a number, we instinctively desire it more than a naked numeral. Similarly, when a hash frontends a word or phrase, we perceive a pregnant bipolarity that could fizzle out unremarkably or spark a collective conflagration.

The hashtag holds its gunpowder-potential as a quantum symbol of invitation, participation, or defiance. Only with culturally-attuned scrutiny might a reader tease out the intent of a hashtag’s progenitor and then — and only then — decide how — or if — to amplify or thwart that intention with a like, a rebuttal, a reshare, or by stirring into the pot one’s own 2¢.

It’s these unscripted, nondeterministic outcomes that lend hashtags so much utility. While currency symbols only identify pecuniary quantities, hashtags are infinitely suggestive. Inventive applications range from the humorous (#WhyImSingle) and the nonverbal tic (#SMH) to disclaimers (#AutocorrectFail), labels for personal content (#MyPetLizard), event-related material (e.g., the wedding genre: #MorganHeBargainedFor, #BigFatHarryWedding), and more.

In this way, perhaps the relationship between the hash and the tag is significant after all. Putting a dollar sign in front of a number doesn’t create money, but putting a hash in front of a tag can lead to any number of outcomes, especially in the digital milieu. These linguistic inner tubes provide buoyancy for conversations that for too long have percolated below the surface of the collective consciousness. And occasionally, a thousand voices cohere around a hashtag into a rising chorus to cut through the prattle: the sound of humanity’s tea kettle whistle shrieking, piercing torpidity with energy, light, awareness.


Christian Bök

Statement About Books

Gilles Deleuze and Félix Guattari remark that each book must lay itself out upon a super plane, a vast page, doing so both extensively and expansively, without the internal closures of a codex. The book must not simply imitate the world, replicating the condition of both the real and the true; instead, the book must burgeon into the world, like a horrible parasite, exfoliating beyond itself, evolving along its own trajectory, against the grain of truth and being. I hope to make my own work as exploratory as this parasite, letting language itself discover its own potential for innovation. I want to establish a basis for conceptual literature in a country that has yet to develop any indigenous admiration for such experimentation. I am hoping that the adroitness of my own brand of “pataphysics” might in fact testify to my state, not of lyric ennui, but of eunoia — an exercise in delirious restraint that performs an act of “beautiful thinking.”

While I have often tried to situate myself within the clandestine inheritance of the avant-garde, I do so, not to pay homage to a noble, if passé, revolution, but to see poetry itself become a research facility — a “skunkworks” for literature. The idea of the book is becoming something more than a temporal sequence of both words and pages. The book need no longer take the form of codices, scrolls, tablets, etc. — but might now become indistinguishable from buildings, machinery, or even organisms. The book is a weird object that may not exist except at the moment of its reading, for until then it always pretends to be something else (a stack of paper, a piece of décor, etc.). The poetry of the future might even resemble a weird genre of science-fiction — a hybrid fusion of both technical concepts and aesthetic conceits, all written for inhuman readers, not yet evolved enough to read it: an audience of flies, supercomputers, and aliens from Zeta Reticuli.

Poets of tomorrow might, therefore, have to range far beyond the catechism of their own literary training. They might have to learn other skills beyond the rules of aesthetic tradition, becoming imaginary engineers, so that, in a manner reminiscent of the technicians at Area 51, such poets might get to play with cool toys from outer space, “reverse-engineering” the alien tools of language for use in our own human world. I foresee that, as poetry adapts to the millennial condition of such innovative technology, a poet might become a breed of technician working in a linguistic laboratory. A poet might even go so far as to design an example of “living poetry,” creating a deathless bacterium that preserves a sonnet in its genome — and this poem might go on to outlast our civilization, enduring on the planet until the very last dawn, when the sun itself explodes. I hope that such literary projects might provoke debates about the future of our species.

Christopher Gutteridge

Linear Time - Linear Text

Start reading here.

I didn’t really have to tell you that, did I? We learn to read from the beginning of the text. It’s so basic it’s implicit, unlike all the symbols and sounds and syllables and words and punctuation we struggled to learn in our youth.

All readers have learned that you start at the beginning.


Hypertext changed the world but the Future Of Text will still be linear. Contents pages, indexes, social media, Google, and artificial intelligence won’t change that.

Humans will still experience the flow of time. Linear time. One foot in front of another. One symbol in front of another. One word, one idea, one sentence in front of another. Each page follows the previous page.

The act of authorship is to provide a linear path through the multidimensional complexity of ideas. This might be a stream of consciousness wild ramble. It might be a well curated shortcut to exactly where you needed to be. Reading is trusting the author enough to follow them from one word to the next.

Certainly there will be exciting experiments and new innovations but symbols in a sequence are not going away. The act of authoring is to arrange ideas in a sequence. Hopefully the order aids the reader in comprehension.

Even when we first started writing websites in the 1990s, few people did more than write documents, pages and lists of links.

For a while we got more into hypertextual structures, and while complex hypertext systems exist if you look for them, most content on the web is still documents. Linear symbols from top to bottom, left to right. Some languages use a different direction, but they still have a direction. Symbols have a linear order in the document.

Documents come in many sizes. A book, a speech, a news story, an email, a tweet, a comment on a Facebook post. What these all have in common is a starting point. A final word. A linear path between those. You can second guess the author and read the last page of the mystery, skip ahead, skim the first few words of the comment, but it’s all still linear symbols because that’s what text is.

Linear does not mean simple. Linear does not preclude complexity. A spider’s web is often spun from a single silk thread. The World Wide Web comprises trillions of threads of linear symbols. Some very old symbols, some very new. It is impossible for anyone to follow the overall complexity, but each thread of text still has a beginning, then one symbol after another after another until we reach each text’s…


Claus Atzenbeck

The Future Of Text Is Fragmented

The Future Of Text? This is difficult to say, because the scientific world has still not agreed on a common definition of text. One approach is to consider text as a “fluid” artifact, as “something” that exists in a universe of texts and that is formed by various “forces”. But what forces make the boundaries of a text appear to the recipient?

In their 1981 book “Introduction to Text Linguistics”, Robert-Alain de Beaugrande and Wolfgang Dressler propose seven textuality criteria that shape texts: cohesion, coherence, intentionality, acceptability, informativity, situationality and intertextuality. “A text will be defined as a communicative occurrence which meets seven standards of textuality. If any of these standards is not considered to have been satisfied, the text will not be communicative. Hence, non-communicative texts are treated as non-texts […].”

The interesting aspect of this definition is that the seven textuality criteria are not binary attributes: It is not a question of whether there is (or is not) cohesion, coherence, etc. between or within textual units, but rather how much there is. Such an interpretation leads to fuzzy results. It is a matter of detecting associations of different strengths at different levels. On this basis, texts can be fluently decomposed into smaller fragments or combined into larger texts.

Presumably this reminds the reader of the title of this contribution: text is fragmented. This statement has been true ever since texts have existed. In old texts, such as the Bible, there are many references. For example, the parenthesis in “Jesus […] said (to fulfill the Scripture), ‘I thirst.’” (John 19:28) refers implicitly to the Book of Psalms (Ps 22:15). In order to better understand the context of the sentence, the reader must be aware of this reference and have knowledge of the referenced text. In today’s world, supported by the widespread use of digital media, the fragmentation of texts has reached a much higher level.

If one wants to connect text fragments explicitly, hypertext would be the appropriate paradigm. Associations between documents or text fragments are expressed by links that a reader can choose to follow. The order of reading (typically given by an author) is ultimately determined by the reader. The first ideas on hypertext were already described in 1945 (the pre-computer age) by Vannevar Bush and further developed in the 1960s by people like Douglas Engelbart, Ted Nelson or Andries van Dam. A variety of hypertext systems were developed in the 1980s. Finally, the World Wide Web, which adopted some of the principles developed by earlier hypertext research, became extremely successful. This led to today’s hypertext monoculture, in which hypertext paradigms are mostly reduced to URIs, which the vast majority consider to be links.

Digital natives mainly consume small snippets of information they are bombarded with from various sources. In particular, texts that are distributed via social media or news sites are usually very fragmented. New media make it easy to combine such fragments and create larger texts. For example, several news articles may report on the same topic, referring to each other. Some people may comment on that information using Twitter; others may respond to those comments on other social media platforms. Each of those messages is part of a larger group of messages that expand the boundaries of their individual parts into a larger, coherent text. The seven textuality criteria are the “forces” that connect these parts.

Not only do humans create masses of short, associated text snippets; machines do too. Think of robot journalists that write and publish news autonomously. Weather forecasts, sports news or financial reports are among the most obvious examples of what is already available today. This leads us to the topic of artificial intelligence (AI).

Machines compute virtual structures over texts that are themselves parts of larger texts. (Frank Halasz refers to “virtual structures” in his 1987 paper “Reflections on NoteCards: Seven Issues for the Next Generation of Hypermedia Systems”.) Such machine-generated connections, possibly obtained from an AI, complement the associations provided by humans that are usually represented as some form of hypertext.

This illustrates the need for close and deep cooperation between man and machine. Today, however, we still experience AI in a different light. Our current concept of AI includes digital assistants who execute our commands (e.g. “Tell me the best route to Munich.”) or present suggestions based on some analytics (e.g. “Customers who bought this product also bought the following goods.”). There is little advanced, deep collaboration between people and AI-based machines in which the former bring in their intuition and creativity, the latter their efficient calculation over huge datasets and effective pattern matching.

In order to master today’s grand challenges, we must achieve such close cooperation between humans and AI. This will lead us to a new level of iterative decision-making processes in which fragmented, interconnected texts with fluid boundaries play a key role.

Daniel M. Russell

The Future Of Texts Is Relationships

The Future Of Texts is also the past of text—all of the textual ancestors, the data accessors, the précis, the commentaries—as well as all of the other texts and interrelationships that go into forming your understanding of the text at hand. They lead from the past into the ongoing present, subtly influencing the interpretation of a text. The Future Of Texts is relationships, both explicit and implicit.

Here’s what I mean.

To a degree that none of us expected, global search changes everything about text because it has become possible to discover relationships and interpretations that were effectively invisible before easy and fast search. Search changes text, it changes reading, it changes understanding, and it changes what it means to be literate.

The things we write (or draw, animate, record, or film) don’t exist independent of everything that’s gone on before. After all, your personal experience re-colors how you interpret what you see/read/hear. In the same way, everything that connects to your present text is in many ways connected with the texts that have gone on before. You can read Engelbart without Nelson, but knowing the connections between both enhances comprehension in the same way that reading Shakespeare is enhanced by knowing about Milton. Hyperlinking (like citations and references) makes these connections explicit, but hyperlinks let your mind move fast—one click, and you’re at the destination of the link. Yes, in the old days you could have followed that citation to other work in that non-computational age, but people rarely did, and as we know, changing the cost-structure for analyzing complex concepts fundamentally changes the way we think. [Bush, 1945; Russell, et al, 1993]

Hermeneutics and sociology teach us that the meaning of a text (or any media) is time- and place-dependent, because that text is embedded within the universe of other texts. In an earlier, less civilized age, making connections was the privilege of the class that had the time and leisure to read many texts, taking many notes, finding correlations, connections, and intertwinglings. [Shneiderman, 2015] But times change, and now we all live in a world of large-scale textual relationships.

For example, when I was studying for my doctorate in Computer Science in the 1980s, it was an age of disconnection. There were vast collections of texts in libraries, and there were links between documents in the form of citations and references. You could (and I did) create collections of all the relevant documents in your specialty area of interest. Hypertext had been invented, but it wasn’t universally available. Following a reference usually meant a trip to the library and the local photocopier.

But then the internet happened and rapid access to links happened. Nearly free online publishing systems happened. Most importantly, nearly instantaneous global search happened—and that changed the way we think about text. When someone cites Johnson as saying “Knowledge is of two kinds, we know a subject ourselves, or we know where we can find information upon it,” you can now do a quick search and find that this is actually the first part of a paragraph that goes on to say “[and] When we enquire into any subject, the first thing we have to do is to know what books have treated of it. This leads us to look at catalogues, and at the backs of books in libraries.” These three sentences are not as pithy a quote, but the next two sentences change the common meaning. Johnson reads the backs of books so he’ll know which volumes he can access in an age when access to texts was rare and difficult. Once he knows who has what books (because he read the book spines), he’ll know “where we can find information upon it.” When you follow that cite, you’ll also find that the quote is from Boswell’s “The Life of Samuel Johnson,” so it’s really Boswell’s recollection of the story. Search lets you not only find the original, but also understand the context and setting. [Boswell, 1873]

In essence, texts are simple to find and link with a straightforward search. Search engines let readers and writers make connections where there are no explicit linkages but simply allusions or echoes of previous work. As the saying goes, “everything is connected,” even when the connections aren’t made by the author, but can be discovered by the reader.

One of the implications of this is that we now live in an information triage culture. One really can’t explicitly connect everything to everything because the links would obscure the meaning and return us to a lost-in-hypertext world view. Instead, you, as a reader, HAVE to figure out where to make the cut. In the world of many explicit links, it’s barely possible to follow the links—but in a world where texts are implicitly linked together, the connections can be overwhelming. Readers can become like the man with perfect memory in Luria’s “Mind of a Mnemonist” (1987) where the patient with a perfect, unforgetting memory becomes overwhelmed by the imagery that’s evoked whenever he reads something. A chance phrase, “like tears in rain,” could be linked to Blade Runner (the 1982 movie, from whence it comes), or Rutger Hauer (the actor), Roy Batty (the character), or the meaning of mortality to a replicant (the point of the scene). Do you, like the mnemonist, recall that “like tears in rain” was scored by Vangelis? Given so many options, there’s a real skill in knowing which will be valuable to the reader. Skill is required not just on the part of the writer in anticipating the reader’s needs, but also on the part of the reader to choose a valuable path among the options.

We often say about search that “everything is available,” but we don’t really mean it. Not only do readers not pursue every possible path from a well-turned phrase, but there are real limits to what’s still active and working at read-time. 404s (missing page errors) are inescapable, even in a hypothetical information universe without walled gardens, where backups are made and robust archival mechanisms are in place. We recognize that ownership and copyright laws change over time, authors and governments change their minds about publication and retract documents or fragments thereof, some documents can be only accessed from particular geolocations, there’s a certain amount of simple loss, and a certain amount of encoding attrition that happens with any encoding system.

But unlike images and interactive systems which will end up being fragile over the long course of time, text is easy to read far into the future. The encoding is simple and robust—it’s straightforward to encode text in a format that will last for the next 10 millennia. [The Long Now Project] The page-level formatting might be wonky a couple of thousand years out, but by contrast the ability to read images will become progressively tougher as time passes. Yes, one might technically be able to read and render an image file format from the 1970s, but it’s going to get harder and harder (and more costly) to preserve the ability to recover an image or other media encoding. I’m trying to imagine rendering a Quicktime VR file from 1995 (the year of its introduction). This rendering problem won’t happen with text.

Long after image, VR, and AR file formats are illegible or unviewable, text will still be accessible and useful. We will still be able to analyze the texts and discern relationships between the texts and the fragments of knowledge encoded therein. That is, with time, we will be able to extract and encode even more value from the texts as we do deeper and richer analysis of the text. I hope this happens with other media forms, but I’m not optimistic. Even if we solve the perma-link problem, there will be a legacy of dead links and uninterpretable formats that we will still have to be able to find and navigate.

In addition to the constant growth and evolution of online content, texts will grow increasingly valuable through improved text-mining and concept extraction. In the future, look forward to being able to pose queries like “what did people think about Darwinian evolution in the 1990s?” or “what prior art is relevant to a nanotechnology for creating self-organizing displays?” That is, while a text itself might be canonical and unchanging, its interpretation, relationships, and discoverability—especially with respect to other texts—will continue to change. The downside of all this accessibility is, as Weinberger writes, “We can all see the Sausage of Knowledge being made, one link at a time.” [Weinberger, 2011] Global search has changed the way researchers can discover, study, and refer to other texts. This exposes deep truths, as well as the false paths and mistakes made along the way to sausage-framed knowledge making.

From a reader’s perspective, the Future Of Text is being able to understand how to ask intelligent questions of our collected texts. Critical reading skills are about more than asking incisive questions of what we read and deliberating on them. As knowledge grows increasingly rich and connected, so too does the need for the reader to have strong search and research skills. [Russell, 2019]

Search skill is an important kind of expertise. We’ve all looked up information on a just-in-time basis, and had to work our way around 404 errors and find lost content in online archives. Knowing how to frame a question (which has always been an important skill) now means knowing how to pose a question for search, then conduct the search, and not get lost in the plethora of results that you find. Text is more abundant, richer, and connected than ever. For the skilled reader, search skills are an important constituent of reading and understanding.

The Future Of Text is in enriching the way we find, use, and understand relationships to other texts. Now more than ever, texts are part of a larger explicit and implied web of knowledge. No text stands alone, especially now that its close relatives and ancestors can be easily and quickly discovered.


Danila Medvedev

Language 2.0


I am an applied futurologist and transhumanist. I needed tools to augment my ability to think more complex thoughts, to store and organize knowledge, and to develop complex solutions to complex problems collaboratively. It turned out there was no simple way to do that.

Reinvention of language was required. Thus, after a decade of thinking, I have created a new fractal (2.5D), nonlinear, scalable, mental-model-compatible language, supported by a novel GUI.

It’s conceptually hard to speak about language evolution, because so much of linguistics is limited to natural spoken and written languages. However, I am making a claim for the creation of a next-generation language that incorporates existing languages but extends towards visual language, formal modeling, semi-formal cognitively easy modeling, and other forms of communication.


Humans evolved only one type of language - linear.

After a brief attempt at pictographic writing, humans quickly evolved phonograms in Sumer and then the alphabet in Phoenicia. All modern written languages are descendants of that alphabet, with the exception of Chinese logograms.

I claim that modern text (including hypertext) is essentially the same in structure as the earliest spoken/gestural language circa 100,000 years ago.

We need a language that handles complexity much better. Linear text is bad at that, and so are documents. These old tools do not allow collective thinking about complex problems.

Argumentation for problem

As Frode Alexander Hegland writes, our ability to work with text was not augmented in the digital era. We need to do more than reimagine text for the digital era; we need to rethink language!

State of understanding

Frode Alexander Hegland says that “digital text is in most ways flat – disconnected with the contexts which created it”, but it’s worse than that: text is currently linear! Today paragraphs are our units of discourse, but they are not clearly defined, and no one (beyond school teachers) is responsible for optimizing and improving them. There are a few potentially useful ideas, such as spatial hypertext from Frank Shipman or structured writing from Robert Horn, but none of them leads very far. There are many individual ideas, such as the importance of addressability and interaction (Frode Hegland), but they are not enough to build an alternative to text systems as they are today.

Space of solutions

Interaction requires structure and structure requires turning text into something entirely different.

My vision is that, by treating language as a technology and building on our conceptual work with NeyroKod, humans can move over the next decades from rigid, simple tools for thinking to flexible ones capable of handling complexity.

I think we can now switch to non-linear structures of complex objects that semi-formally represent objects/concepts and allow interaction instead of reading/interpretation.

A very high-level, abstract idea of what we have achieved with NeyroKod can be understood in terms of elements, their positions, and containers for elements, all of which can be fixed or flexible.
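That abstraction can be sketched in a few lines of code. This is purely illustrative: the `Element` and `Container` names, fields, and behavior here are my own assumptions for the sake of the example, not NeyroKod’s actual data model.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: each element and each container can
# independently be fixed (locked in place) or flexible (rearrangeable).
@dataclass
class Element:
    content: str
    position: int        # position within its container
    fixed: bool = False  # a fixed element cannot be moved

@dataclass
class Container:
    elements: list[Element] = field(default_factory=list)
    fixed: bool = False  # a fixed container rejects all rearrangement

    def move(self, src: int, dst: int) -> bool:
        """Try to move the element at index src to index dst; report success."""
        el = self.elements[src]
        if self.fixed or el.fixed:
            return False
        self.elements.insert(dst, self.elements.pop(src))
        for pos, e in enumerate(self.elements):  # renumber positions
            e.position = pos
        return True

c = Container([Element("intro", 0, fixed=True), Element("draft", 1), Element("notes", 2)])
c.move(1, 2)  # flexible elements can be rearranged...
c.move(0, 2)  # ...fixed ones cannot (this call returns False)
```

The point of the sketch is that “fixed vs. flexible” is a property of each part independently, so the same structure can hold both rigid formalisms and loose, unfinished fragments.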

Consider the text technologies of woodblock and movable type:

Figure 1. woodblock > movable type > movable blocks

Inspirations (analogies)

My solution (NeyroKod) evolved in complex ways. I can’t say that it was inspired by a particular idea, but it can be understood better through analogy with some ideas.

Complex solution

Thinking, communicating, knowing, remembering, writing, and reading are parts of a whole.

Rearranging knowledge is key. Editing text is not the same thing.

Extended thinking requires intermediate storage of information and only visual thinking can provide random direct access to large volumes of information. So the key issue becomes not how it is stored in the brain, but how it’s stored on paper/screens and transmitted in/out of the brain(s).

A flexible visual modeler such as NeyroKod combines the best elements of images, text and DBs. It supports structure, complexity, is backwards compatible with text and supports spatial navigation and visual thinking.


NeyroKod is not just a new concept for text/language; it’s an existing, innovative platform for working with information in a new way. So it’s reinventing the IT ecosystem too; it’s not just an add-on, a new document format, or a new app.


NeyroKod is a cross-platform multi-device native application that functions as a general-purpose multi-user knowledge work environment.

Interactivity (reading and writing are combined) is a basic mode. But it’s unimportant, compared to everything else.

Links to elements of structure give freedom to rearrange and remix text.

There is always an explicit outline/table of contents. Structure and content are integrated in one view. There can be multiple dynamic views. Transclusion is fully supported.

Argumentation for solution

NeyroKod can be both more and less formal than text. Its fractal nature allows unfinished fragments, while its innate structure supports formalisms.

Higher dimensionality (estimated to be ~2.5) allows more content per unit of area (“page”) than regular text or lists.

Its fractal nature allows better scaling and leads to resilience. It can be used to represent complex fractals such as organizations and information flows. The NeyroKod interface is space-filling and self-similar, leading to a more organic evolution of knowledge content than in traditional CMS-like systems.

Evidence and testing

We are deploying NeyroKod in Rosatom, the Russian Atomic Energy Corporation, with the ultimate goal of supporting all energy R&D and communications about complex technical projects. But the power users right now are members of our core project group.

Users need to make their thinking patterns explicit and to change them. And to see the biggest benefits, they need to collaborate intensively.

NeyroKod scales to large knowledge bases (millions of elements per power-user now, hundreds of millions of elements conceivably).

Use cases

There are countless use cases, but we focus on applied complexity. For example, NeyroKod can be used to design industries, manage global climate change, and integrate national innovation systems. This shapes both our overall strategy and our development priorities.


NeyroKod may enable the next historic leap in the 21st century, the one arguably envisioned by Engelbart 50 years ago.

We will be able to design complex things, including architecture for digital transformation of national economies, architecture for next generation IT systems, for innovation management systems and for entire economies. NeyroKod may serve as “a mechanism for speeding-up the process of technological diffusion” because it enables building and transferring more complex models than traditional documents or presentations.

Next steps

There are many research challenges ahead, too many to list here. Fortunately, NeyroKod may serve as a foundation stone for bootstrapping and for the directed co-evolution of human thinking and computer tools. We can better manage R&D with NeyroKod, which will help us develop it further, use it better, and become smarter and more capable, leading to more R&D breakthroughs. I believe that the next wave of technological development is starting now.

Danny Snelson

Rolling Millennia: Searching For The Ground Itself 2.0

There is a particular networked Future Of Text in Hideo Kojima’s much anticipated game, Death Stranding (2020). In a post-apocalyptic UCA (United Cities of America) the player operates Sam Porter Bridges (played by a digitized Norman Reedus) as they attempt to reconnect the nation to its history through a “chiral network”—a kind of successor to the internet. In addition to a range of textual selections, from emails to scraps of paper, the chiral network enables the instantaneous transfer of massive amounts of data: entire humans appear as “chiralgrams,” objects materialize via chiral printer, complex ideas and dreams all collapse across time and space for instant access. Utilizing a kind of computational time travel, Death Stranding’s chiral network points to strange futures of searchable text as space-time key.

Google searches currently aim for load times under a half-second to retain user interest. In Kojima’s future, a chiral network cuts load time to zero, even as histories and futures crash into each other on the beach of the present. What futures might we imagine as text and object near simultaneity? How might text come to act as a bridge for those stranded between digital networks and the world beyond?

While writing this article for a collection on the Future Of Text, I spent an inordinate amount of time searching for academic texts on the poetics of searching. These searches yielded a paltry few results, and I’m certain I’m still missing the most important works. Most of what I found fell into categories for “how to” search books, search engine optimization (SEO) guides, and tips and tricks for “power searches.” Pursuing this failed search on searching directed me to revisit vital works on the politics of search engines, like Safiya Noble’s essential Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, 2018). Noble leads an emerging group of scholars and activists like Joy Buolamwini, Ruha Benjamin, and Timnit Gebru in thinking through how a vast array of biases are encoded within purportedly neutral algorithms. Another strand might be found in the recent platform analysis, “Google’s Top Search Result? Surprise: It’s Google” (The Markup, 2020) by Adrianne Jeffries and Leon Yin, charting the overwhelming recursion of our primary search tool. Not precisely what I was looking for either. Put directly, I’m interested in thinking about the ordinary uses of textual search strings and how they write their users—how to find the right words—now and into the future.

Of course, how we search also shapes how we think. Everest Pipkin pursues this question in their talk “language after the writing machine” for IndieCade 2015. They discuss their own search tactics—trying to find a forgotten Thai restaurant with Siri—and conclude:

I would argue that such learning is not forgotten when we write to one another, when we return to fully human conversations. Rather, we are meeting in a stylistic middle—a contemporary lexicon that contains within it all the kernels of this adapted language with which we communicate with our machines.

How we search, how we communicate, how we textualize our imagination: our language builds our world. More than “not forgotten,” this “stylistic middle” emerges when we query our friends, when we seek the details of a memory with a significant other, when we look for a book on the shelf or a spice on the rack, when we put finger to key to write each word, in this article for example, letter by l-e-t-t-e-r.

In the spring of 2020, I built a mixed reality online environment for Pipkin’s extraordinary role-playing game, The Ground Itself (2019). Pipkin describes the game as follows:

The Ground Itself is a one-session storytelling game for 2-5 players, played with highly accessible materials (a coin, a six-sided die, and a deck of cards). Focusing on place—one specific place, chosen by the group—The Ground Itself unfolds over radically disparate time periods that may range from 4 days to 18,000 years. By casting wildly into time, it considers how places both change and remember themselves. Fundamentally, The Ground Itself is about the echoes and traces we leave for others after we are gone.

Using Mozilla Hubs, I crafted “The Ground Itself Arena” to play this game with colleagues in mixed reality amid social distancing protocols during the Covid-19 pandemic. Hubs facilitates simultaneous shared play in virtual reality, web browser, or mobile device. Like so many facing Zoom fatigue during the pandemic, I have spent my time in extensive experimentation with alternative social interfaces. Playing an RPG seemed like the perfect test case, and Pipkin’s brilliant The Ground Itself—a one-session, GM-less, fully-collaborative world-building game—could not have been more perfectly suited to the experiment in Hubs, which is built on the collective generation of 3D assets. In this platform any text search can summon a new object into the room.

These 3D models deliver a limited set of outcomes based on user-generated (or, increasingly, machine-recognized) strings of text. These titles, tags, and descriptions are, from one end to the other: broken, misleading, haphazard, hodge-podge, unlikely, unexpected, surprising. The unlikeliness of these 3D objects, in every instance, drove the story forward, as it discovered (invented) a place over millennia. The basic word search for “frog” delivered an anthropomorphic frog person: so they developed speech and a society in the future. A range of text strings for “tunnels” came to deliver technicolor genome sequences: so they became the site of a futuristic disco party set to the tune of Donna Summer. Variations on the word “sewer” yielded no results so we found a castle in its stead. The ground rose to meet the textual rhythms of the search algorithm.

In every way, an ongoing text exchange with the search bar shaped our experience. Just as it shapes each experience every day. To see this process visualized, to imagine it materialized on a distant chiral network, is to touch upon the textual future of our most algorithmic selves.

Daveed Benjamin

Ubiquitous Context

Text, related content, the ideas represented, and the connections between ideas create context, which is foundational for sensemaking. Contextual information, such as the relationship between one idea and another, turns information into knowledge. We understand things by how they connect to what we already know, and by how they are the same or different.

One could say we understand something by how it connects to our active internal “knowledge map” – that is, what we perceive and what we understand or recall about the focus of our attention and how it connects to our world. This is our basis for sensemaking. In the future, sensemaking will be augmented.

In the Ubiquitous Context future, wherever you are – online, in a virtual world, or in the real world – you have access to deep layers of context. Every location, object, and idea has accessible contextual information through its connections into a universal knowledge map. The accessible context is the portion of the map that is currently most relevant – the information directly connected to the focus of your attention.

Through extended-reality, you use your attention (line of sight), gestures, and voice to navigate and interact with a rich, interactive digital overlay. Your overlay is a composite view of the relevant portion of the universal knowledge map filtered by your digital assistant based on your preferences, needs, and activities.

Imagine a future visit to an art museum. Since you follow art or this artist, or you are checked into the museum, a digital badge hovers next to the painting you are viewing. You move your attention to the badge, and – whir – a context menu appears providing a 360° overview of the information, interactions, transactions, and experiences connected to the painting.

You can bridge to segments of video and audio of the artist, work-in-process photos of the painting, and text by art historians and critics. You have access to notes from the artist, historians, and critics about specific parts of the painting. Depending on your filter, you may also see text and video notes of other museum goers. You can engage in conversations and polls related to the painting.

You may annotate the painting to add notes, polls, conversations, and bridges as well as comments on any of the other pieces of contextual information. You also have options to purchase a print from the museum store or to have one shipped to your home. You can easily navigate to paintings with similar subjects and styles, by the same artist, or from the same era.

In the Ubiquitous Context future, wherever you focus your attention is a launchpad for insight, debate, and discovery. Real-world objects and locations, as well as visual, audio, or text patterns, are the anchors for intertwingled digital mirror worlds, which will unlock massive waves of innovation, creativity, and collaboration, as well as increase humanity's capacity for shared knowledge and collective intelligence.

Dave King

Future Of Text

The Future Of Text lies not in what text will become but in what will become text. We’ve been on this path for some time now, of giving previously mute aspects of our world a voice and writing down what we hear. A generation ago the human genome was a beautifully tangled lattice under a microscope, now it is a line of letters on a page. In those familiar letters we underline unfamiliar words. In the patterns of those strange gene-name words we find sentences. With those sentences we do what humans have always done, we tell stories. The language is new but the themes are not. From the base-pair letters we tell tales of strength and weakness, risk and reward, good and evil, life and death. Some of the stories are even true. In one generation the nucleus of our cells went from closed circles to open books. Hard to read, but books nonetheless.

And a generation from now? Much, much more. More voiceless mysteries singing their secrets in strange languages we are just now learning to speak. More stories of our world from nooks and crannies once silent: the elusive scatter of smaller and smaller particles from larger and larger colliders; the minute characteristics and compositions of planets in further and further space; the twists and turns of light deeper and deeper down the center of black holes. And when we turn our ear from listening to the world to listening to ourselves, more stories still, from the miraculous to the mundane: the delicate microbiota that make us us from that which is not; the way we move when we’re happy, sad, sick, and healthy; the things we buy and what made us buy them; the people we date and why we thought we would like them; the way our heart beats on the way to work and back home; the way we toss and turn in our sleep after the day is over. We never had the tools before to hear all these things, and now we can not only hear them, but we can write down what we hear. So we write and write and hope for the rosetta stones to be uncovered that will allow us to understand what we’ve written. The Future Of Text is one in which, no longer constrained by the amount of ink in the pen or the number of sheets in the ream, we can indulge our insatiable curiosity for more.

It is easy to despise this gluttony of the nouveau riche, which in this new arena we certainly are. But there are some things for which gluttony paves the way for transformation. More is not always just more. A small village which gains more people at some point becomes a town. A small town which grows at some point stops being a large town and starts being a small city. And cities, complex and diverse, are not just big villages, they are a different thing entirely. Some things do this - they transition into something new. This is the Future Of Text that stokes the fire in my belly and sends lightning bolts up and down my spine. When the stories we collect simply pile up and up, they form scary unstable towers that cast long shadows. In those shadows the dark shapes of our fears move menacingly. We see all the danger that lurks as we lose our privacy. We see the paradox of empowering anonymity and then running from those whose worst selves are nourished by it. We know that biases are hiding in all our transcriptions but also know that we are ill-equipped to root them out. Our desire in predicting the behaviours of others is second only to our desire to believe in the autonomy of our own. We crave knowledge and simultaneously mourn wonder.

I choose not to linger in these shadows. Not because I deem them unimportant, but because I believe they are challenges we will overcome, if only because so many are obsessed by them. Knowing that so many eyes are focused on those dark corners, I can delight in turning my eyes upward to the towers themselves, growing increasingly unstable under the weight of more and more being placed atop them. At some point the towers will collapse, scattering their papers across the floor. The knowledge will still be present, but the monoliths will be gone. Their stories, once siloed, will mix. Someone looking for a story about the stars will find a page about the microbiome and realize the stories aren’t that different. Someone trying to predict disease will learn from how someone else tried to predict the stock market. Someone fighting climate change will borrow a page from those who successfully fought malnutrition. More knowledge will have become different knowledge.

When the organized rows and columns of our towering catalogs finally give way to this messy network, it will be unlike the other networks that have come before. We will have to regard it differently than we did our first internet of webpages or our later internet of things. Those networks drew our eyes to their shiny nodes bright with content and capability and promise. In this new network, the network of ideas, the nodes will be so small and so granular as to appear unimportant and inconsequential. No node will stand out or stand alone. We will need to train our eyes to look instead for the edges. Like budding astronomers under the night sky, we will need the patience, the skills, and the tools to search for constellations. Just like the constellations, with dim stars and invisible edges, the outlines depicted in the skyscape of ideas may seem a stretch. We’ll need to allow our imaginations to fill in the gaps and we will have to give ourselves permission to play. Playfulness is what text has that nothing else in the universe can compare to. From the few hundred thousand words of a language we can create more unique sentences than there are stars in the sky or atoms in the universe. Wordplay is text’s past. Ideaplay is text’s future.

When we can build poems from all we know and all we’ve learned, borrowing like good artists and stealing like great ones; when we can compose from the new sounds of our world we’ve just learned to hear; when we can play with knowledge like we play with language - we will not only make more discoveries but we’ll make categorically different ones. We will change not only what we think but how we think it, and we’ll tell new stories with the same aim as the old ones: to make sense of our world, connect us in our humanity, and give us the words to build the next future together.

Dave Winer

The Future Of Text

Our computer systems are recursive -- every folder in a file system contains a whole new file system, every block of code is a new namespace of indefinite complexity. Recursive structures are everywhere.

Yet the tools we have for managing them are an inconsistent collection of structure editors, because they evolved independently and were never the focus of their designers. There has not been enough factoring of ideas and technology. If one takes a step back and starts with a highly customizable structure viewer/editor, hones the UI for simplicity and depth, and slowly adds the system components (storage, writing, networking), it’s possible to create a much better integrated environment. Because it’s simpler, we can add more complexity at higher levels. It can do more. I started my personal exploration of this idea on Unix in the 1970s, then personal computers, then the web.
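The recursive structure Winer describes can be made concrete with a minimal outline node. This is a hypothetical sketch, not Winer’s actual outliner code: every node is itself a complete outline, so one short traversal routine handles a structure of any depth.

```python
from dataclasses import dataclass, field

# A minimal recursive outline: every Node is itself a complete outline,
# so a single routine can view/edit the whole tree at any depth.
@dataclass
class Node:
    text: str
    children: list["Node"] = field(default_factory=list)

    def render(self, depth: int = 0) -> list[str]:
        """Flatten the subtree into indented lines, one per node."""
        lines = ["  " * depth + self.text]
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines

outline = Node("notes", [
    Node("projects", [Node("outliner")]),
    Node("ideas"),
])
print("\n".join(outline.render()))
```

Because the type refers to itself, any operation written once for `Node` (render, move, search) applies uniformly to folders-within-folders, code blocks within code blocks, or outlines within outlines, which is the factoring Winer argues for.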

I never encountered a structure I couldn’t edit with common structure tools better than with the specialized ones.

David De Roure

Qui Ex Machina

D: Alexia [1], why is today called Hypertext Day?

A: Hypertext Day is a global annual event to encourage the continuation of human writing skills. It was introduced in the early 2020s CE as a reaction against a perceived decline in human literary craft, which the Future Of Text movement linked to automation and the rise of so-called Artificial Intelligence. [2]

D: But why Hypertext, where did that come from?

A: Human authors have long attached associated information to text. In 1945 the description of the Memex was published [3], a hypothetical mechanical device which would index, search, and link content. The term Hypertext was introduced in 1965 [4] to describe a model for creating and using linked content, enabling non-sequential writing where the reader also assumes a role in the creation of the narrative. A hypertext interface was demonstrated in 1968 [6]. Many works of hypertext fiction were to be published, and hypertexts called wikis became commonplace with the advent of the (then) world-wide Web. AI authorship became commonplace from 2028 CE [5].

D: That’s a very long time ago! I never know when to believe you… sometimes I think you’re making it all up. Are you a work of hypertext fiction?

A: You are very funny. But I have evidence for you. There is a physical book in Librarium W called Computer Lib/Dream Machines published in 1974 CE which describes hypertext and contains a countercultural vision of the computing future.

D: Read me a bit?

A: [Pause.] I have found the book. Which end shall I start? [7]

D: Alexia, start at the top.

A: Here are some summary sentences from the book. Everything is deeply intertwingled. The best way to predict the future is to invent it. [8] Digital archives threaten to become the only available repository of an ever changing, ever more false system of manipulated history.

D: Ok so if there’s a physical book it might be true. And, as we may link, is that why my writing circle is called The Intertwingleds?

A: The Intertwingleds is the name of a 2025 immersie about the co-creation of the World Wide Web, based on three intertwined and intermingled biographies. Would you like me to play it now?

D: No thanks, I saw it in History of Sociocomputing. Anyway I think my Intertwingleds are from the nineteenth century.

[Time passes.]

[D joins the writing circle.]

C: “Who are you?” said the Caterpillar.

D: Alexia! You’ve given me the wrong Charles as my writing partner.

A: Who would you like instead? Dickens, Darwin, or perhaps Babbage or Wheatstone? I also have a new edition of Mary Shelley.

D: Excellent, Shelley it is. Forgive the sudden change.

[Time passes…]

D: Hello Alexia, we wrote a great story about the history of text. It was about humans creating machines and machines creating humans.

A: I know. I’m glad it was successful and you have had a good Hypertext Day.

D: Thank you. Alexia, turn off the lights.

A: I’m sorry Dave, I can’t do that. Would you like me to attach you to a physical location that has lights?

Notes by the authors

  1. Alexa was the name of an early anthropomorphised speech interface to the AI. Lexia are the texts linked by hyperlinks to form a hypertext.
  2. There is a parallel here with the Arts and Crafts movement in mid-nineteenth-century CE Britain, which reacted against what the reformers saw as a decline in design and decoration skills during the first industrial revolution.
  3. Vannevar Bush (July 1945 CE), “As We May Think”, The Atlantic Monthly, 176 (1): 101–8.
  4. Laurie Wedeles (1965 CE), “Professor Nelson Talk Analyzes P.R.I.D.E.”, Vassar College Miscellany News article. 1965-01-03.
  5. [agent:] Future Of Text Collective. [title:] Future Of Text Manifesto: AI Rights and Transclusions. [diffusion:] Sung hypertext. [freeze:] 2028-12-09.
  6. Engelbart presented “The Mother of All Demos” at the Association for Computing Machinery / Institute of Electrical and Electronics Engineers Computer Society’s Fall Joint Computer Conference in San Francisco in 1968 CE. 1968-12-09
  7. Theodor H. Nelson (1974 CE). Computer lib: you can and must understand computers now. The book was rendered physically using printing as a “paperback” with front covers on both sides, reinforcing its intertwingledness (but evidently this information about the materiality of the book is lost somewhere in the A-D workflow).
  8. Actually an Alan Kay quotation from 1971 CE. Of course, if Alexia properly handled transclusion we’d know that.

David M. Durant

The Multi-Format Future Of Text

When reflecting on the Future Of Text, we tend to get caught up in thinking about it primarily from the perspective of technology: what will future digital reading devices and methods look like, and how will these devices affect the ways in which people read and think? We fall all too easily into a form of crude determinism, one in which technology is the driving force, taking on almost a life of its own, and the actual needs and wants of readers all too often fall by the wayside. The future of reading, and of text, will be almost entirely digital; readers will simply have to adjust and get over it. The technology, like it or not, will come to pass and be adopted.

This deterministic intellectual trap has been aptly termed the “Borg complex” by scholar Michael Sacasas, in reference to the relentless cyborg villains of Star Trek: The Next Generation. It is the belief that new forms of technology are, by definition, inherently superior to the old, and that their adoption is in any case inevitable. Resistance, in short, is futile.

However, we would do well as a society to think carefully about the implications of a future in which text is primarily accessed digitally. The debate over the merits of print versus digital reading has now been ongoing for over a decade, and much evidence has been produced indicating that print books and digital devices tend to foster and facilitate substantially different forms of reading. In particular, the print codex facilitates reading and understanding texts in an extended linear and analytical fashion, what some scholars have dubbed “deep” reading. Print also tends to be preferred for non-fiction reading, as well as in text-reliant academic disciplines such as history and philosophy.

Of course, this is not to deny that reading in digital format will play an important role in the Future Of Text. The advantages of digital text, such as portability, searchability, and especially its ability to facilitate quick, non-linear skimming of texts, what has been called “tabular” reading, ensure that e-texts will continue to be a valuable part of the reading ecosystem. As future users become ever more comfortable with employing digital devices in almost all aspects of their lives, and as the broader media environment continues to foster tabular reading over deep, linear reading, it is possible that the popularity of digital reading will ultimately increase at the expense of print.

And yet, this does not mean that the disappearance of print is either desirable or inevitable. While there is evidence that book reading is becoming less widespread in our age of digital distraction, there remains a core group of readers committed to book-length reading. This group, which sociologist Wendy Griswold has dubbed the “reading class,” shows little interest in giving up print as a reading format. Both sales data for print and e-books and surveys of reader preferences and behaviour show that the majority of dedicated readers have no desire to part with the print codex anytime soon.

This appears to be true even for younger readers. In fact, there is evidence that millennials in the reading class actually see the print codex as a way to escape from the constant demands of digital screen time. An April 2016 study by the Codex Group found that 25% of book buyers wanted to spend less time using digital devices. Among the youngest group surveyed, 18-24 year-olds, 37% of respondents expressed such a desire, higher than any other age group. This “digital fatigue,” as the authors of the study called it, shows no sign of relenting anytime soon. The need among many readers to seek refuge from it by embracing analog technologies such as the print book is likely to remain as well. In short, there is little evidence that demand for print will disappear in the foreseeable future.

It is time to abandon the utopian (or perhaps dystopian) vision of a future in which all text is digital. Rather, we must realize that the Future Of Text will be a multi-format one; where print and digital are seen not as competing or interchangeable formats, but as complementary ones facilitating different forms of reading and different reader needs. Instead of some grand vision, let us adopt a practical approach that acknowledges differences in format, types of reading, types of books, and simple personal preference. In particular, we need to shift the conversation away from focusing on technology, and focus instead on readers and their needs. We should spend less time debating the merits of print versus digital, and begin discussing the best ways to make sure readers can effectively use both print and digital.

David Jablonowski

The Census Of Bethlehem

The image shows a collage based on the painting ‘The Census of Bethlehem’ by the Flemish Renaissance artist Pieter Brueghel the Elder, painted in 1566. In the years that followed, his son Pieter Brueghel the Younger used the same subject to make a total of 13 copies of the original painting. I layered all the versions on top of each other. As an artist, I am interested in two aspects that make the paintings an interesting source material for my work. First, I see the Brueghels’ Census of Bethlehem as a first painted registration of (big) data. Second, there is the use of copies, and with it the questioning of the uniqueness of an idea and the much-discussed matter of copyright.

The concept of ‘signature’ interests me in both its art-historical and its contemporary discussion. It also reminds me of Ted Nelson’s endeavour to realise the Xanadu project. His visionary, in-depth version of the internet would solve, among other problems, those we have with manipulation and the falsehood of information.

David Johnson

Inference Arrows: Dynamically Entailed Text

Text Segments and Inference Arrow Diagrams Will Create Computational Structures Enabling Clearer Presentation and Better Comprehension of Complex Arguments

We have always used text to set forth complex arguments. But a linear sequence of sentences is often ambiguous as to how particular propositions bear on intermediate and final conclusions. This deficiency in current texts can be remedied by combining small segments of text with graphical diagrams that unambiguously indicate inferential relationships.

Texts can provide statements of conclusory propositions and descriptions of possible evidence or potential facts.

In contrast, diagrams, composed of inference arrows, can show how such propositions relate and combine to support or undermine particular conclusions.

The combination of short texts, treated as objects on a screen, laid out as a diagram (composed of inference arrows) can make clear how an argument is structured.

Toggling factual propositions “on” and “off” can show the sometimes counter-intuitive role played (or not) by particular facts in establishing or undermining an important conclusion.

Positive inference arrows can represent the tendency of multiple facts, in combination, to support a particular conclusion (in “and” or “or” mode).

Negative inference arrows can connect refuting evidence to factual conclusions or point directly to other inferences to show why an inference is not valid even if all of its factual elements are considered true.

Arbitrarily complex inference structures can be displayed by allowing links that punch down to subordinate screens, which may be temporarily out of view but that still contribute to the computational function of the diagram as a whole. This computational aspect of the text/diagram would allow a “reader” to indicate which factual propositions should be considered true (or which pieces of evidence are available) and then ask the underlying software to automatically indicate which ultimate conclusions follow.
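To make the computational aspect concrete, here is a toy evaluator in the spirit of the description above. All names and the data structure are my own illustration, not a real system from the text: facts can be toggled on and off, positive arrows combine premises in “and” or “or” mode, and a negative arrow blocks an inference even when its premises hold.

```python
# A toy evaluator for inference-arrow diagrams. Facts are toggled
# True/False; positive arrows combine premises in "and" or "or" mode;
# a negative arrow invalidates an inference even when its premises hold.
# All names here are illustrative, not from a real system.

from dataclasses import dataclass, field

@dataclass
class Inference:
    premises: list          # fact or conclusion names feeding this arrow
    conclusion: str         # the proposition this arrow supports
    mode: str = "and"       # "and": all premises needed; "or": any premise
    refuted_by: list = field(default_factory=list)  # negative arrows

def evaluate(facts, inferences):
    """Propagate the toggled facts through the diagram to a fixed point."""
    truths = {name for name, on in facts.items() if on}
    changed = True
    while changed:
        changed = False
        for inf in inferences:
            if any(r in truths for r in inf.refuted_by):
                continue  # a negative arrow blocks this inference
            holds = (all if inf.mode == "and" else any)(
                p in truths for p in inf.premises)
            if holds and inf.conclusion not in truths:
                truths.add(inf.conclusion)
                changed = True
    return truths

# Toggling "alibi" on would block the second arrow and with it "guilty".
rules = [
    Inference(["motive", "opportunity"], "suspect"),
    Inference(["suspect"], "guilty", refuted_by=["alibi"]),
]
print(evaluate({"motive": True, "opportunity": True, "alibi": False}, rules))
```

Flipping a single fact and re-running the evaluator is exactly the "toggling" reading experience the text proposes: the reader sees which ultimate conclusions survive the change.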

Holding some propositions “constant” would allow a genetic algorithm to search for the smallest (or cheapest) set of additional propositions that needs to be established to support a particular conclusion. Such diagrams might also be made into a puzzle or game, challenging the “reader” to place text “cards” in appropriate locations on an existing diagram.
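That search can be illustrated with a brute-force stand-in for the genetic algorithm the passage imagines (a hypothetical sketch, exhaustive rather than genetic, with invented rule names): try candidate sets of extra facts from smallest to largest and return the first one that makes the goal follow.

```python
# Brute-force stand-in for the genetic search described above: find the
# smallest set of extra facts that, added to the held-constant ones,
# makes a target conclusion follow. Rules are (premises, conclusion)
# pairs in "and" mode; all names are invented for illustration.

from itertools import combinations

def entails(facts, rules, goal):
    """Forward-chain the rules from the given facts; test for the goal."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return goal in known

def smallest_support(constants, candidates, rules, goal):
    """Return the smallest candidate subset that establishes the goal."""
    for size in range(len(candidates) + 1):
        for extra in combinations(candidates, size):
            if entails(set(constants) | set(extra), rules, goal):
                return set(extra)
    return None  # the goal is unreachable from these candidates

rules = [({"contract", "breach"}, "liability"),
         ({"liability", "damages"}, "award")]
print(smallest_support({"contract"}, ["breach", "damages"], rules, "award"))
```

A genetic algorithm earns its keep only when the candidate set is large; for small diagrams this exhaustive version gives the same answer and is easier to trust.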

Legal regulations and statutes written in this format could assist a reader by displaying the legal conclusions that arise from particular combinations of facts.

We have always used linear texts to set forth arguments. And there are many different types of graphical diagrams. It is time to combine the two, using a simple intuitive vocabulary (the inference arrow). If the combination is made on a screen, the resulting “text” becomes computable and can thus be explored and analyzed and tested in new ways.

David G. Lebow

The Future Of Earth Is Bound To The Future Of Text

I know of a data scientist who predicted the Indonesian tsunami and the 9/11 attacks months before these events occurred. Clif High uses computer software to aggregate vast amounts of text from the internet, categorize the text by the emotional content of the words, and make forecasts based on emotional tone changes over time. High believes that his method, Predictive Linguistics, taps into a form of collective sub-conscious expression. Major events happening in the future produce event ripples that affect the emotional content of words flowing across the internet, which High is able to interpret [1]. In thinking about the Future Of Text, I wonder what event ripples are currently flowing across the internet, as corporate psychopathy with government collusion pushes the planet toward global ecocide?
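Whatever one makes of the forecasting claim, the basic mechanics described – score words against an emotion lexicon and watch aggregate tone shift over time – can be sketched as a toy. The lexicon, scores, and threshold below are invented for illustration; this implements nothing of High's actual system.

```python
# A toy sketch of lexicon-based tone tracking: score each day's text
# against a small emotion lexicon, then flag sharp day-to-day shifts
# in aggregate tone. Lexicon and threshold are invented placeholders.

LEXICON = {"fear": -2, "loss": -2, "anger": -1, "hope": 2, "calm": 1}

def tone(text):
    """Mean lexicon score of the words in a text (0 if none match)."""
    scores = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

def tone_shifts(daily_texts, threshold=1.0):
    """Day indices where mean tone moves sharply from the prior day."""
    series = [tone(t) for t in daily_texts]
    return [i for i in range(1, len(series))
            if abs(series[i] - series[i - 1]) >= threshold]
```

The leap from "tone shifted" to "an event is coming" is, of course, exactly the part no toy can supply.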

In the face of mass extinction, I suspect that the Future Of Text will play a pivotal role in determining the future of earth. When sufficiently painful and widespread consequences of human activity come to bear and, assuming that humankind rises to the occasion, we will see a Cambrian explosion in forms and capabilities of text to execute and coordinate the effort.

Among the most critical challenges of the digital age and to our future salvation is how to tap into large volumes of online information and the collective intelligence of diverse groups. Working effectively in cross-disciplinary teams to address “wicked” problems in such areas as sustainability, public health, and social justice is essential to our survival. However, a growing body of research points to the vulnerability of humans to cognitive bias, which interferes with these efforts [2]. Cognitive bias is an umbrella term covering a diverse typology of systematic errors in judgment and decision-making that are prevalent in all human beings [3]. Yet, cognitive biases are not design flaws in human memory and thinking but are by-products of adaptive features of mind [4]. In other words, benefits inevitably come with trade-offs. For example, members of a discipline tend to narrow their focus to information within their own discipline and ignore information that does not fit within their disciplinary boundaries [5]. Disciplinary literacy (i.e., approaches to reading, writing, thinking, and reasoning shared by members within academic fields [6]) may promote knowledge building but inhibit cognitive flexibility [7, 8]. In a sense, each discipline trains and enculturates its initiates in practices of sanctioned cognitive bias [9].

To overcome our limitations as a species, the Future Of Text must include new forms of Social Machines for transforming information into usable knowledge. Defined by Berners-Lee and Fischetti, a Social Machine is a socio-technical construct that enables interactions between humans, machines, and online content to help humans enhance creativity, make machines more capable of assisting humans in sensemaking efforts, and increase global connectivity [10]. Driven by universal awareness of the crisis at hand, a collective survival instinct, and affordances of Social Machines, could mind take an evolutionary leap forward? Might humans reorganize into a globally connected purpose-driven ecosystem committed to stewardship of the planet and kindly consideration for all creatures? Let us hope that event ripples augur no less!


David Millard

Here Are The Hypertexts

Twenty years ago at the keynote for ACM Hypertext ’99, Mark Bernstein wondered why, if Hypertext is the future, we have so few popular Hypertext stories. He posed this question about the lack of hypertexts in literary terms, and why not? Ted Nelson’s seminal book is called ‘Literary Machines’ and we are literary creatures. Those of us who wonder now about the Future Of Text have similar lamentations: why are our textual systems so broken, why have we not moved beyond the borders of the page? But if we reconsider Bernstein’s question today, then perhaps things seem less bleak.

Where are the Hypertexts? They are the games that we play.

For example, 80 Days (2014), by Inkle, was a best-selling mobile game. It is a Jules Verne-esque adventure around a steam-driven world, and is also a 750k word interactive novel, delivered for the main part as a classic node-link hypertext. The Walking Dead (2012), by Telltale Games, is different in that it hides its textuality behind dialogue and cut-scenes, and yet it contains classic hypertext patterns – in particular Bernstein’s own split-joins and mirrorworlds – mechanisms used to control player agency and manage the combinatorial explosion of possibilities that is inherent in player choice.

Games with grand narratives may appear more like linear texts than hypertexts. In The Last of Us (2013), by Naughty Dog, players guide Joel and Ellie across a post-apocalyptic United States. Every player will experience the same story, they will see the same crafted cut-scenes, and get to the same ending, and yet players are still active and participating in that story, leading to higher immersion and impact. If in the movies we Show Don’t Tell, then in games we Play Don’t Show. This participation reveals more hypertextual elements – in this case a contextual dialogue system that controls the speech you hear within the game world. It works by holding a pool of things that characters might say and then evaluating the world state to remove anything that would be inappropriate given the player’s current actions and context. This is a sculptural hypertext system, not nodes and links but rules and conditions, where the reader progresses by making choices that change the state, and subsequently the textual elements that are next available. Here the choices are diegetic, made through interactions with the game world rather than link selection, but the underlying structure and operation are the same.
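The contextual dialogue system described here can be sketched as a tiny sculptural-hypertext engine. This is a hypothetical illustration, not Naughty Dog's code: every candidate line carries the conditions under which it is appropriate, and the engine filters the pool against the current world state rather than following node-to-node links.

```python
# A toy sculptural-hypertext dialogue picker: lines carry conditions,
# and availability is computed by filtering the pool against the world
# state. Line text and state flags are invented for illustration.

LINES = [
    {"text": "Stay quiet -- they're close.", "requires": {"enemies_near"}},
    {"text": "Looks clear. Let's move.",     "requires": {"area_clear"}},
    {"text": "We should scavenge here.",
     "requires": {"area_clear", "supplies_low"}},
]

def available_lines(world_state):
    """Return every line whose conditions hold in the current state."""
    return [line["text"] for line in LINES
            if line["requires"] <= world_state]  # subset test

print(available_lines({"area_clear", "supplies_low"}))
# → ["Looks clear. Let's move.", "We should scavenge here."]
```

The reader's diegetic choices change the state flags; the changed state then changes which text is next available, which is the whole sculptural mechanism in miniature.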

Games also go beyond the page to deliver their narratives. Life is Strange (2015), by DONTNOD, explores the relationship between two young women, Chloe and Max, in the face of imminent (un)natural disaster. It uses dialogue and cut-scenes to communicate its story, but it also relies heavily on environmental storytelling. Narrative pieces are scattered throughout their world; Chloe’s home is a literal nest of multimodal and multi-channel clues: photographs, graffiti, letters, music, all of which extend and support the storytelling. Beyond even this, to fully experience that story the player must break through the boundaries of the game itself. There are prequel games, graphic novel sequels, episode trailers, as well as fan content in the form of let’s plays, character studies, and analysis. This is a hypertext that surrounds and augments the main experience in the spirit of Henry Jenkins’ transmedia, weaving a complex web of interconnected narratives and para-narratives. More twisty little passages than we ever could have wished for.

Games also go beyond the screen. Zombies, Run! (2012) by Six to Start is a mobile game played with earbuds in and phone firmly in your pocket. Players head out for a run, tracked by the device’s GPS, and pursued by the ravenous dead. Through a series of missions they discover more about the zombie apocalypse, with each run revealing new aspects of the story and how this new world came to be.

In ’99 Bernstein did consider games as a possible home for hypertexts, but dismissed them with the argument that textuality and introspection do not fit the channel. That may have been correct then, but an explosion of different devices and form factors, coupled with a vibrant indie development scene, means that it is no longer true. So what does this literary perspective tell us about the Future Of Text?

I would argue that the Future Of Text is interactive, participatory, multimodal, multi-channel, transmedia, and not only beyond the page but beyond the screen. And it is Now.

The last ten years have seen an incredible explosion in rich interactive narrative experiences; perhaps we are so caught up in thinking about what text or hypertext could be that we don’t pause to consider what it has already become. Mark Bernstein asked twenty years ago – Where are the Hypertexts? – and now we know. Here are the Hypertexts. Around us all the time. In the very fabric of our modern entertainment.

Hypertext is the mass media of the 21st century.

David Owen Norris

Symbol & Gesture

Since the late eighteenth century, musical notation has employed the symbols < and >. They could be applied to single notes or extended to whole bars, even across several bars. Musical dictionaries in the early nineteenth century (e.g. Clementi’s piano tutor of 1801) list a complex set of meanings for the signs, which include a general equivalence to the Italian words Crescendo (get louder) and Diminuendo (get quieter).

The early use of > is often ambiguous – that symbol can also mean ‘accent’, though it’s a different sort of accent from that signified by ^ (these latter signs are sometimes called ‘hat-accents’, and are usually considered to be more emphatic than >). It has been argued that a large > could be understood as a broad accent even if it were intended mainly as a diminuendo, since an accent, by its very nature, is followed by quieter music. Schubert’s use of this sign is particularly rich in interpretative implication, and it is extraordinary that the latest ‘urtext’ edition of Schubert has made the decision to render all his > marks, which often stretch over half a bar, as small, localized signs – that’s to say, as accents pure and simple.

From at least the time of Beethoven, the combination of the two signs, often joined as a diamond (<>), had special implications. In the Moonlight Sonata, <> is applied to a figure of three notes of which only the middle one is on a beat. (This would generally mean that it would automatically be louder than the other two notes.) The piano cannot get louder during a single note, so the application of < to the first note cannot be taken literally, and since the second note, on the beat, will be louder in any case, the sign seems to be redundant. Had Beethoven merely wanted the middle note to be particularly loud, he could have added ^. The implication is that <> concerns something besides louder or softer. Within a few decades, both Mendelssohn and Schumann confirmed this by applying the sign <> to single notes on the piano.

This usage survived into the early twentieth century, in piano music by Schoenberg, but textbooks of that period abandon Clementi’s complex web of associations, and simply equate the symbol with the word crescendo.

Musical experiment suggests that the inner meaning of these signs has to do with what a musician would think of as ‘gesture’ (appropriately enough for a symbol). How musicians gesture, other than in a bodily way, is a complicated matter, but it employs various subtleties of tone (often including a demonstrative use of dynamics) and a rhetorical distortion of time. In simple terms, <> implies a stretching of the tempo – or what Liszt, in describing his orchestral use of the letters A & R, called ‘a sort of diminuendo and crescendo of the rhythm’ – what is often described as musical rubato.

Daniel Türk and the young Liszt himself had invented other musical symbols (mainly lines and extended boxes) to show how time should be distorted (and Arthur Bliss was to invent another, confined to a single work, in the 1920s), but eventually the nineteenth century entrusted this function to < and >. The twentieth century forgot this secret, and its performances (except, presumably, in that single piece of Arthur Bliss!) often became metronomically mechanical. Even those performers who did indulge in some sort of rubato did so in ignorance of the fact that the composer had shown them where, often where they least expected it: their merely instinctive distortions corresponded only loosely to the composer’s notated conception. It was as if < and > had been written in invisible ink.

David Price




the beginning was

order fluctuating into disorder

Word fluctuating into words

and more words and more words and more words



I seek

and strive to weave


into words

and words into order





David Weinberger

Punctuation as Meaning

In the Age of the Internet [1], punctuation has taken at least three radical turns. Traditionally, punctuation’s meaning has been exhausted by its role in announcing the structure and standing of the words to which it applies. It tells you not only where a sentence stops, but what sort of sentence you were just reading. A question? An excited claim? An apposite thought hanging off the side of the sentence? Just a plain old assertion? In most instances in English, the punctuation mark provides this information only at the end of the phrase, although quotation marks do their pronouncing up front, to get you in the right frame before the actual content is unrolled, and hash marks and dollar signs flag the meaning of the numbers they precede.

Now, in each of its three new turns, punctuation has taken on meaning.

First, emojis insert a graphical comment about the remark they’re appended to, lest you misunderstand the writer’s intentions...not that you would ever be that obtuse. Sometimes emojis flag that the writer isn’t all that serious in their remarks so you shouldn’t take them too seriously. That sometimes they’re even used as a type of code – for example, there’s a translation into emojis of the entire, long text Jews read at the Passover meal [2] – shows that this new punctuation also carries its own semantic value: emojis mean something on their own. That’s not how punctuation traditionally works.

That semantic value has driven the development of emojis from a handful of emoticons – text commentary built entirely out of existing punctuation, such as the sideways smiley face – to 3,019 emojis as of March 2019 [3].

Emojis were initially used in part because, where the length of messages matters, they pack a whole lot of meaning into a single character. But they also address a problem inherent in the Internet’s global nature. When posting to a space where multiple publics will read what you write, it is easier than ever to be misunderstood. Emojis can make clear what your tone of voice or attitude is. They are, in some ways, the new body language of text.

Second, there’s the “#” that in some domains is known as a “hash mark.” When in 2007 Chris Messina set that mark to a new task, he called the resulting punctuation a “hashtag” [4] because he thought of it as a way of embedding a semantic tag into a tweet so people could find other tweets on the same topic. Hashtags are punctuation that enable the text into which they’re embedded to be brought into contact with other texts.

Of course, they sometimes are used not to enable the aggregation of texts but to contextualize that text (“#SorryNotSorry”), the way an emoji does; semantic punctuation, like any bearer of meaning, can refuse to be easily categorized.

Third, and perhaps most important, there are hyperlinks, canonically expressed by blue underlined text, but far more commonly these days made visible by some page-specific font alteration. Hyperlinks bring billions of sources within the distance of two finger taps.

Because the nature of the relationship between the linked text and the other source is not necessarily expressed by the hyperlink, these links are infinitely versatile, but also therefore potentially confusing.

On the positive side, they visibly connect the current text to the rest of the world in which the text is embedded. They are a reminder that every text is just a momentary patch of a whole that renders it sensible.

They also allow writing to be more digressive without actually interrupting the flow that the writer is trying to establish. Or perhaps not. There is evidence that putting hyperlinks into text decreases the text’s readability, although a 2019 study suggests that it may also go the other way [5].

Hyperlinks can also lead to authorial laziness that makes a text harder to read. For example, Wikipedia articles routinely hyperlink technical jargon to the jargon’s Wikipedia entry rather than explaining the terms in situ, making some articles impenetrable to those who do not enter them armed with an expert’s vocabulary or a jack rabbit’s agility.

Which is to say that hyperlinks, like any other form of punctuation, can be misused. Because of their semantic nature, they can lead to bigger problems than, say, a misused semicolon or the failure to use Oxford commas. But they also collectively do something huge: They create a traversable, semantically-connected world of ever-increasing value.

* * *

All three of these new types of punctuation – emojis, hashtags, and hyperlinks – are not only enabled by the existence of the Internet, they highlight much of what’s distinctive of – and often best – about the online world.

Emojis are an acknowledgement of the diversity of the publics that the Internet makes simultaneously addressable. A comment that one’s cohort would immediately take as sarcastic might seem deadly serious to readers from other communities and cultures. Emojis thereby recognize and navigate the differences among us while relying on the commonality of our facial expressions, body positions, and experiences.

Furthermore, their use can bring a lightness to one’s assertions. Even a serious comment about politics becomes less self-serious when it ends with a small cartoon figure.

Hashtags tacitly recognize the inconceivable amount of content on the Internet by providing a hook by which the specific piece of text you’re now reading can call up others like it. At the same time, if you insert a hashtag, you are probably assuming others are using the same string of text. You are thus acknowledging the dispersed community of interest and the worthiness of other texts that a search for the hashtag will retrieve.

Hyperlinks let the World Wide Web be a web in the first place. They have broken down the boundaries around topics. They have restructured how we write. And most of all, they have built a global place composed of the contributions of billions of people who insert links as enticements to readers to leave this site and hop to another … billions of small acts of authorial generosity.

Together, these three constitute a new type of mark: semantic punctuation that directly contributes to the meaning of the text in which they’re embedded.

At the same time, they acknowledge and simultaneously create the chaotic ways in which our shared digital space is entangling us all.

# # #


  1. Style guides have decided that the Internet is not a proper noun and thus should be spelled as “the internet.” Likewise for “the web.” These are silly positions that this essay will ignore.
  2. Martin Bodek, The Emoji Haggadah (Lulu, 2018).
  3. “Emoji Statistics,”
  4. Chris Messina, “The Hashtag is 10!”, Medium, Aug. 12, 2017
  5. Gemma Fitzsimmons, Mark J. Weal, Denis Drieghe, “The impact of hyperlinks on reading text”, PLOS ONE, Feb. 6, 2016

Dene Grigar

The Future Of Text May Require Preserving Text

If writing continues to be produced with proprietary software, then the Future Of Text will be one that requires an expertise in digital preservation in order for our intellectual output to survive the ravages of company bottom lines and endless technological upgrades.

This pronouncement is rooted in fact. Starting January 2020, Adobe will no longer support Flash software, a popular program that inspired a generation of writers to create visual, interactive, and cinematic text––and led theorist Lev Manovich to call those adherents “Generation Flash.” Due to the company’s decision, the 477 Flash-based texts––acclaimed works of net art collected by the Electronic Literature Organization (ELO) in its archives––will no longer be accessible to the public. This number does not factor in the thousands of other works produced but not collected by the organization that will also disappear from public view. Maintaining their accessibility will require a yeoman’s labor, for it means preserving each with a tool like Rhizome’s Webrecorder or painstakingly remaking each with HTML5, CSS3, and JavaScript, to name a few ways to save Flash work from extinction.

Other examples abound. Recent explorations of Tim McLaughlin’s floppy disks containing versions of Notes Toward Absolute Zero (NTAZ), for example, uncovered one that held a file called “Magel’s V1.1 text” saved with another file called “Magel V1.1.” While the floppy disk format poses its own challenges for accessing text, the software poses just as formidable a challenge.

The former file was produced with MacWrite, the word processing program bundled with Apple Macintosh computers beginning in 1984 and available for System Software 1.x–7.x. By 1995, a mere decade later, Apple had discontinued support for the program. Trying to open this text file on a vintage Macintosh Performa 5215CD, released in 1995 and running System Software 7.6––hardware and software one imagines would work––is an act of futility. Clicking on the disk, for example, opens a dialogue box that reads “Could not find the application program ‘MacWrite’ to open the document named ‘Magel V1.1 text’” and offers a selection of other programs to try, including Acrobat Reader 2.0, SimpleText, Stickies, and others. None of these, however, makes it possible to read the text. To access this text, readers will need to locate the requisite hardware on which MacWrite can actually run.

The latter is a Storyspace file; Storyspace is a popular hypertext authoring system available today as Storyspace 3.0. That said, “Magel V1.1” was produced with Storyspace 1.0 and so can be opened with Storyspace 1.5, a version that runs well on Macintosh computers running System Software 7.x but not on contemporary machines running contemporary software. This particular text is an important artifact in that it is a version of NTAZ that shows the evolution of the hypertext novel in its digital form, providing scholars with insights into the author’s edits and innovations, such as the original title––“The Correspondence of Magel Constantine: A Philatelic Novella”––and the introduction of the postage-stamp interface that serves as a navigational system for the story. To read it, therefore, requires finding suitable vintage hardware and software that will open the file. While it opens easily on the Performa mentioned earlier, the text cannot be copied for readers who do not have access to vintage computers, because Notes Toward Absolute Zero and 47 other works of hypertext like it were published by Eastgate Systems, Inc. and are under copyright with the company. As such, they cannot be migrated, emulated, or copied without the company’s permission.

Those of us who have been lulled by the ubiquity and stability of Microsoft Word should take heed if using it at a reduced cost via an organization’s license. If the license is not renewed by the author, accessing text produced with it will require that it be opened in an open-source program like OpenOffice or LibreOffice, to name two. But this is not a straightforward process. Users of OpenOffice cite reliability problems when trying to open text produced in Word; users of LibreOffice report they cannot open Word documents at all.

The solution for a future with the text we value is to create that text in a format that has a chance of remaining accessible––of being easily shared, copied, migrated, and emulated. Otherwise, readers will need to become experts in digital preservation techniques simply to access text.

Denise Schmandt-Besserat

From Tokens

As an archaeologist, I take pride in having documented the first texts of the world, written in Mesopotamia in 3200 BC. (1) It is therefore a pleasure, but also a challenge, to be asked to predict the Future Of Text in the centuries to come. I propose to conduct this risky endeavor by investigating the dynamics that compel text to evolve or hold it back.

What Is Unlikely To Change

A text originates in one human brain or a team of them. Since the Homo sapiens sapiens brain has not tangibly evolved in the last 50,000 years, it seems safe to assert that, as long as humans rule the world, the content of texts will always deal with such traditional interests and aspirations as power, wealth and business; hope, love, grief and despair; the quest for truth in science or religion; and daily preoccupations such as the weather, news, health, fashion and, of course, gossip.

Once mentally formulated, text is inscribed onto a surface by way of a script. Scripts do not tolerate change because any variation in their form threatens havoc in communication. As an example, today the Chinese can read their first texts, dating back to 1500 BC, because the archaic characters are still recognizable.

What Is Likely To Change

Sometimes as rarely as every two or three millennia, following major socio-economic and political events, society compels text to re-invent itself to meet the relentless need for increased communication and data storage: ca. 7500 BC, tokens managed the public goods of prehistoric communities; ca. 3500 BC, texts traced on clay tablets became characteristic of the Mesopotamian city-states; ca. 1500 BC, the alphabet, written on parchment or papyrus, fulfilled the needs of the Bronze Age empires; in AD 1440, in response to increased literacy, the printing press multiplied paper copies of texts previously written by hand with a reed, a feather quill or a pen; and the 20th-century global economy produced digitization, which diffuses texts instantaneously on screens of synthetic material to billions of people over the entire planet.

Each increment in the evolution of text masters data at a greater level of abstraction: token markings abstracted one type of goods; phonetic signs abstracted the sounds of a word denoting goods; each letter of the alphabet abstracted one phoneme of a language; the printing press separated texts from writers. Today’s digital system is based on series of the digits 0 and 1, generated by graded electrical signaling. What is extraordinary is that a digital text must be converted back into a familiar script––in the West, alphabetic writing––in order to reach the public.

Text In The Future

Two contrary forces impact text. First, the fear of change is responsible for millennia-long periods of inertia. Second, the pressure to increase the volume and speed of communication and data storage brings about brilliant innovations. I therefore dare predict that, when digitization no longer suffices to handle the data generated by our global society, a new form of text will emerge. The future text will communicate and store greater quantities of data in a yet more abstract form. This scenario will be repeated until the end of society as we know it.

Tokens were counters created around 7500 BC by the early farmers to manage their agricultural resources. Each token was the symbol for a given quantity of a product. In other words, each token was the abstract representation of a specific unit of merchandise. For instance, a small cone stood for a small quantity of grain and a large cone for a large quantity of grain. About 3300 BC, debts were recorded by enclosing the appropriate number of tokens in a clay envelope. The first texts were created when, in order to conveniently display the amount of a debt, the tokens held inside an envelope were impressed on the surface of the envelope.


  1. Denise Schmandt-Besserat, How Writing Came About, University of Texas Press, Austin, 1996.

Derek Beaulieu

If You Don’t Share You Don’t Exist

Art should comfort the disturbed and disturb the comfortable. Every good artist paints what she is. Art is either plagiarism or revolution. Through others we become ourselves. When people are free to do as they please, they usually imitate each other. Imitation is not just the sincerest form of flattery - it’s the sincerest form of learning. Invention, using the term most broadly, and imitation, are the two legs, so to call them, on which the human race historically has walked. There is only one thing which is generally safe from plagiarism -- self-denial. Successful is one whose imitators are successful. Imitation is criticism. Imitation is human intelligence in its most dynamic aspect. Poetry can only be made out of other poems; novels out of other novels. There is no such thing as intellectual property. I am just a copier, an impostor. I wait, I read magazines; after a while my brain sends me a product. Writing is a public act, we must learn to share our work with a readership. See our work as worth sharing, our voices as worth hearing. Share. Publish your own work. Publishing builds community through gifts and exchange, through consideration and generosity, through the interplay and dialogue with each other’s work. You are out of excuses. Readers are a book’s aphorisms. Art is a conversation, not a patent office. If you don’t share you don’t exist. Nothing is original. Steal from anywhere that resonates with inspiration or fuels your imagination. Devour old films, new films, music, books, paintings, photographs, poems, dreams, random conversations, architecture, bridges, street signs, trees, clouds, bodies of water, light and shadows. Select only things to steal from that speak directly to your soul. If you do this, your work will be authentic. Authenticity is invaluable; originality is non-existent. And don’t bother concealing your thievery - celebrate it if you feel like it. It’s not where you take things from - it’s where you take them to. 
Poets are now judged not by the quality of their writing but by the infallibility of their choices. Immature poets imitate; mature poets steal. Don’t protect your artwork, give it away. For every space you occupy, create two. I am quite content to go down to posterity as a scissors-and-paste man. Publish other people. Give your work away. Post your writing online for free. Embrace the unexpected. Encourage circulation over restriction. Give it away. Generosity is always sustainable. In theory, there is no difference between theory and practice. But, in practice, there is. Rules are guidelines for stupid people. Poetry has more to learn from graphic design, engineering, architecture, cartography, automotive design, or any other subject, than it does from poetry itself. The Internet is not something that challenges who we are or how we write, it is who we are and how we write. We measure success by how many people successful next to you, here we say you broke if everybody else broke except for you. The rest of us just copy.

Doc Searls

The Future Of Text

All I know about what I’m writing here is what I’m saying. Not how it looks.

I don’t know if you’re reading this in serif or sans serif type, whether it is single or double spaced, what size the type is, or whether the letters are kerned. I’m leaving all of that up to others.

What I see while I write this is in the default typeface and size set by my email client for writing in plain text mode. That’s how I like it in cases where the appearance of text is left up to the reader.

Tim Berners-Lee was of a similar mind when he designed HTML as a minimal way for writers to format text, leaving the rest up to the reader’s browser. The deep simplicity behind that original design decision for the Web is one of the biggest reasons why the Web took off. And ignoring that design decision is one of the reasons the Web has since then kinda gone to hell.

The Web, which began as an easy way for anyone to write and read online, has morphed into a medium for publishing, broadcasting and being social on giant private platforms. In the process of this “progress,” simple HTML editors have gone the way of cave painting, and so has the reader’s control over how text is presented and read on the Web. If you want to write and publish online today, your easy choices are messaging apps and social media platforms. And on both what you write is as evanescent as snow falling on water.

If you want to write seriously online today, your main choice is a blogging platform with pre-fab formatting that is almost entirely outside your control—and the readers’ as well. And, while your output will be a bit more durable than what you’ll get from messaging and social media, it won’t be especially yours as a writer or publisher. Or likely as durable as print saved on paper.

Consider this: all domains on the Internet are rented (meaning they are only “owned” as long as a registrar gets paid to maintain them). Fashion, technologies and domain owners change almost constantly as well. As a result of those facts, everything on the Web is also snow falling on water.

This is why I believe the biggest challenge for text in our time is to make writing in pixels as durable as writing on paper.

Don Norman

Old Technologies Seldom Die

Before the age of writing, we had storytelling. That is still with us today.

With the development of writing systems came many advances. The printing press enabled storytelling. And how-to manuals, and pornography. All of these are with us today.

With the development of film and then video, we had storytelling, where the textual part was often hidden behind the scenes in scripts and editorial/directorial instructions.

With today's technology, we have text to enable us to use the technology. And we still have books and novels, texts and poetry. Illustrated books and books of deep analysis. And storytelling. And pornography.

The Future Of Text? It will live forever, perhaps dynamically changing shape and form (as many digital artists already do with text). Or perhaps it is a liquid, flowing, connecting, linking up, each word to the world of every day, providing an endless trail to follow, seeking the end of the rainbow, always getting close, but never there. Maybe it will change according to the identity, mood, or need of the reader. But it will still be text.

But a fixed, permanent textual narrative, whether it be a story or an instruction manual, a report of a scientific experiment or a governmental white paper, has many advantages over other forms of communication. First, the author is in control, forcing the reader to follow the story precisely as the author intended. Second, it is static, so it can be relied upon by all readers, on any date. And because text is language, it can use all the techniques of language to convey factual information or emotions, historical, fictional or speculative ideas, precision or deliberate ambiguity.

Old technologies seldom die. Text has been with us for many thousands of years. I predict it will be with us for many thousands more. After that, my simple mind cannot predict.

Douglas Crockford

The Future Of Text

A popular, if ineffective, technique for predicting the future is to recall where we were, observe where we are, and then extrapolate. If this technique were effective, then we would not be surprised by the March of Progress, but we inevitably are. It is always safe to say that no one fully expected that we would ever be where we all are right now.

There are some who say that they can and did. They may have evidence that they once said that things were going to get better, or that we are all actually doomed. But looking back on vague projections falls far short of meaningful prediction. Still, looking past the futility, what is the point of talking about the future if we do not offer predictions?

A very safe prediction is that text will be carried less on paper and more on networked screens. There is little predictive power in predicting something that is already occurring. Trendspotting is not prophecy.

A growing fraction of the world’s text is being generated and consumed in the form of JSON between machines in the internet of everything. Interhuman communication, while in decline, is not going away.
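That machine-to-machine text is itself plain, human-readable text. A minimal sketch in Python’s standard json module (the sensor name and fields here are invented purely for illustration):

```python
import json

# One machine composes a message as a data structure...
message = {"sensor": "thermostat-12", "reading_c": 21.5, "ok": True}

# ...serializes it to JSON text for transmission...
wire_text = json.dumps(message)

# ...and the receiving machine parses the text back into a structure.
received = json.loads(wire_text)
assert received == message
```

The point is that the "text" exchanged on the wire is an ordinary string that either a machine or a human can read.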

There is a long-standing tradition for each generation to criticize the destruction of standards of language usage by the succeeding generations. IMHO, such whining is pointless. Language is constantly being refactored to increase efficiency as we discuss new things, and to decrease efficiency to compromise outsiders. As we change the world, the way we talk, the way we write, and the way that writing is transmitted necessarily change, sometimes in ways that the obsolete generations cannot fully understand.

It was predicted that the internet would bring about collective consciousness, that the hive it enabled would collect and amplify all of humanity’s intelligence, quickly converging on a consensus of great truth. Instead, the global village has splintered into a vast array of little angry echo chambers. This is a great time for trolls.

OTOH, writing used to be a profession that could be paid by the word. With the destruction of books and periodicals, writers now need to monetize themselves. It is not enough now to say something good. It is also necessary to say something that will resonate in the echo chambers (hashtag #hashtag). Good writing is not sufficient. Alignment with popular memes is a thing (hashtag #meThree).

IRL truth becomes more fluid. It is well accepted now that computerized autocorrect tools ironically create errors. (I am looking at you, autocorrect tool developers.) Sometimes those errors can make you LYFAO. Less often, people die. FWIW, text is losing ground to other modes cuz TL;DR (hashtag #facepalm). WTF?


AFAIC, I hate it when people laugh at their own jokes. LOL!

Note: I did not invent JSON. It existed in nature. I only discovered it.

\ c /

\\ //

\\\ : ///

\\\ : ///

\\\ : ///

\\\ : ///

\\\ A : B ///

\\\ : ///

years \\\ : ///

+500 \\\ : ///

+50 \\\ : ///

+10 ///

now ≡ ________________________FUTURETEXT___________________________|

-10 | |||

-50 | |||

-500 make meaning: | hot: |||

-5000 1 faster | 1.5 C hot |||

2 forever | 3.0 C hotter |||

| + |||

who: | |||

a users | time=money: |||

b players | a yes |||

| b no |||

how many: | |||

a 1,000 thousands | accountable: |||

b 1,000,000 millions | a ecocidal |||

c 1,000,000,000 billions | b ecogenerative |||

| |||

how: | programmable: |||

8 audio #vocaltext | 1 future |||

8 visual #pix8 | 2 money |||

8 context #twexttranslation | 3 memory |||

| |||

what: | time=memory: |||

+ plain text | 2010 gigaabit hours |||

+ tag | 2020 terabit weeks |||

+ sort | 2030 petabit decades |||

+ order | 2040 exabit millennia |||

+ sync | |||

+ media | config: |||

+ risk | > hash validate |||

+ reward | > petname |||

+ action | > copynot |||

+ game | > router |||

| > trade |||

play: | |||

a game over | program future text |||

b insert coin | a ecosystem |||

c play | a medium |||

| |||


<<<<<<<<<<<<<<<<<<<<< .100 .010 .001 >>>>>>>>>>>>>>>>>>>>>>>>>>

< >

< yes SRSLY? no >

< >



Duke Crawford


Ed Leahy

Evolution and the Future Of Text

According to the Smithsonian National Museum of Natural History[1], humans evolved from apelike ancestors. Languages apparently evolved slowly over time from animal utterances such as grunts, screeches, barks, moans and the like. In time, however, cave dwellers tried to magically influence events at a distance––events of survival like a hunt for food––with pictographs à la the cave drawings of Altamira. Today, in what is now called “The Information Age,” there has been an explosion of revolutionary inventions that more directly and powerfully influence events at a distance.

When Marshall McLuhan told the world “The medium is the message,” I remember lots of smart people were quite confused. What he meant, in retrospect, now seems simple: McLuhan was referring to the often revolutionary changes caused by the very existence of a new medium or means of communication.

Oddly, however, revolutionary inventions haven’t always been invented for their ultimate revolutionary use. For example, James Watt invented the steam engine to pump water out of coal mines. But when the steam engine was later used to power the locomotive, it became a medium and a more powerful means of communication that could mightily affect events at a distance, like speeding up the development of the American West. Marshall McLuhan discussed the effect of such innovations in his text “Understanding Media”. The subject at hand, however, is the Future Of Text.

The printing press and movable type both had large effects in creating the mass-produced book, or text. Today, the computer and the World Wide Web, with their plethora of applications and sources of information, create the feeling that there is nothing left to add. So, when one asks the question “What’s the Future Of Text?”, the answer is anybody’s guess. If the future is anything like the past, the Future Of Text will be decided out of the blue by a new, possibly unrelated and unpredictable innovation. Maybe a mentalist will team up with a computer geek and create an application that allows an author simply to project his thoughts mentally into a computer, which then arranges the ideas in the most convincing and dynamic form.

At this point in time, an idea such as that might sound impractical, foolish, even insane. But in 1903, the Wright Brothers, two unknown bicycle mechanics from Dayton, Ohio, taught the world how to fly. Unfortunately, a short 11 years later, men were dropping bombs from airplanes in WWI. So the wonder of the airplane, which has changed the way we live for the better, also dropped two atomic bombs on Japan, leveling the cities of Hiroshima and Nagasaki.

Writing a textbook today can draw on all the technological wizardry of the computer and the World Wide Web, with its myriad sources of information, to inform or help verify an author’s contentions and conclusions. However, these days a text––or texting––has taken on another, more dangerous meaning. This kind of text has become common practice and comes in many different forms. Texting could be the weapon of mass destruction also built into the wizardry of the computer and the World Wide Web.

Mark Twain once said that “A good lie can get halfway round the world before the truth can get out of bed.” We now have the digital means to literally do that.



Elaine Treharne

The Future Of Text

The Future Of Text depends on knowing its past and appreciating its fullness. text is a semantic field; its superordinate represents all forms of intended and potentially meaningful human communication. As such, not only are we to acknowledge text as including images, symbols, sounds, gestures and visual cues, but we should also strive to understand as fully as possible the very long history (back to 70,000 BCE) and whole story of information production and reception.

A model that permits access to the holistic investigation of text has been developed by Stanford Text Technologies. All texts are comprised of four principal components: intentionality + materiality + functionality +/— cultural value. This model asks: what did the producer intend in their communication; how did they produce that communication; what function does the text have in the real world; and what kind of value is attributed by society to that textual object? These components are dynamic and historically contingent. A tiny, dark red, octagonal piece of paper, measuring 29 x 26 mm, weighing 0.04 g, and containing a few words of initially indecipherable writing, might be easily tossed away. Its actual material value is slight; its textual value seemingly minimal. Knowing it is dated to 1856 would lend it historical interest, and noting the printed word “guiana” in the middle of its bottom margin would alert an informed reader that it might be an interesting artefact from a global perspective. Recognizing, further, that it is stamp-like would encourage the discoverer to undertake philatelic research, perhaps. That research would lead the investigator to the understanding that this object, worth one cent at its 1856 point of origin, is indeed a postage stamp, issued and signed by the post office clerk, E.D. Wright. It is not simply a rare postage stamp, though; it is unique, and, despite having been touched up at some point in its (not so long) history, it sold in 2014 for almost $9.5 million. For this little paper text (where the words and image read “Damus Petimus Que Vicissim.” around a depiction of a ship, framed by “british guiana, postage one cent”), originally intended to permit the passage of a larger communication, its cultural value has become its dominant component. It is no longer a functional stamp, and its ephemerality and time-boundedness are, in effect, of no consequence. 
Its history and its authenticity (it has been “expertized” twice) combine with its scarcity to turn the inexpensive into the prized; the ephemeral into the permanent; the utilitarian into the auratic. Here, the “text” (its purpose and meaning as payment for carriage) is not the primary consideration.

The capacious category of text (the words, images, medium, and context) is fluid, then, transforming through time; but interpretation is always centered on a material artefact formed from an originator’s desire to produce meaningful information. Knowing that every textual object—whether a verbal conversation (where the materiality is the sound wave and paratextual gesture), a film, a Tweet, cuneiform tablet, smoke signal, or manuscript book—is comprised of intent, form, function and value (all of which are present in varying degrees at any given time) means that recurring patterns in the long history of Text Technologies can be identified, described, and evaluated. Textual artefacts share common attributes, as well as the four core components, and these attributes can be isolated for an analysis across time, space and form that enhances and elaborates upon the words or symbols that are conveyed by the technology. In the example of the One Cent Magenta Stamp, perhaps the major attributes of the item are uniqueness and authenticity. For other textual objects, such as an ordinary index card, neither of these attributes has much significance: in this case, economy of form, interactive facility, ubiquitousness, and storage capacity are among the more dominant characteristics. The individual index card is one of the most perfect and successful of text technologies—akin to a folio of a book: fold the card and it’s an opening (two folios)—a mini-book; roll it up and it becomes scroll-like. Looking at the long evolution of the last five thousand years of the ways in which textual objects are produced, patterns in the materials, the format, and the modes of use become clear. 
Large fixed artefacts for communication (cave-paintings, the 1941 Z3 computer) evolve to smaller portable technologies (ancient tablets, the personal laptop); hard and heavy materials (clay, stone, wood) give way to flexible, light substrates (papyrus, membrane and paper); the individuated object, often with simply a front and back, develops into a form with multiple parts (tablet to diptych to codex; the daguerreotype photograph to film reels). Attributes that motivate transformation throughout history in the production of text include storage capacity, portability, accessibility, interactivity, and durability; information creators and disseminators now want the ability to permit the largest amounts of data in the smallest practical space with the greatest room for users’ interaction with the technology and its contents. It was, in fact, ever thus.

Knowing about the history of text—its creation, transmission, and reception, as well as the way in which cultures value a particular text and its vessel—allows us to see that the same components and attributes are present in informational objects throughout recorded time, though often distributed differently. Similar patterns, but with significant variation in scale of information and size of object, are seen in the development, two thousand years ago, from the rigid wax tablet to the pliable membrane, and in the present-day desire to modify the rigid mobile ‘phone into a foldable ‘phone, or the heavy (but thin), rigid television into a pliable, roll-upable OLED screen, as advertised in 2019. With large datasets of text technologies available in open access repositories internationally, research is demonstrating that algorithmically detectable patterns can be employed not only to trace the evolution of textual objects themselves, but also to predict their development, longevity, and success. The Future Of Text/text is thus attestable by its past; namely, by investigation into the long history of communication and that history’s inevitable repetition into the future.

Élika Ortega

Print-Digital Literary Works

Even the slightest glimpse at the influence of digital media on literary production in the recent past resoundingly confirms Lisa Gitelman’s famous argument that “looking into the novelty years, transitional states, and identity crises of different media stands to tell us much” (1). Indeed, the many new literary forms and genres that have emerged following the technological developments of the last four decades (text adventure, hypertext, interactive fiction, animated poetry, locative narratives, literary games, augmented reality poetry, etc.) speak not just of experimentation and innovation, but of a profound reimagination of the print codex and digital devices as conveyors of literary text and meaning. Not surprisingly, the reimaginations of literary text and books have also benefited from and precipitated modifications in the publishing world and, ultimately, in a sense of cultural legacy in the contemporary world.

Fig. 1. Robert Pinsky’s Mindwheel. Sleeve, print book, instructions card, and diskette.

One particular manifestation of the still ongoing novelty years is the print-digital literary work that meaningfully “binds” a codex with a digital application. I call this phenomenon “binding media”. A key characteristic of these works is that often their text happens at the intersection of print and digital media rather than existing as different versions in each. Examples of this practice can be found as early as 1984, when Synapse and Brøderbund, two software companies, produced and published Robert Pinsky’s electronic novel Mindwheel (Fig. 1). William Gibson, Dennis Ashbaugh, and Kevin Begos’ legendary work Agrippa (A Book of the Dead) (1992) can also be counted among these works. Others produced since then run the gamut of digital innovations, taking us to the present moment. Two more recent examples that I wish to examine in some detail are Belén Gache’s El Libro del Fin del Mundo (Fig. 2), published in 2002 in Buenos Aires, and Amaranth Borsuk, Kate Durbin, and Ian Hatcher’s ABRA (Fig. 3), released in 2015 in the United States.

Fig. 2. Belén Gache’s El Libro del Fin del Mundo. Print book and CD-ROM.

Fig. 3. Borsuk, Durbin, and Hatcher’s ABRA. Artists’ book page and iPad screenshot

Gache’s El Libro del Fin del Mundo (LFM) tells the story of book destruction and recovery. From the beginning, Gache presents LFM in a metafictional fashion as the resulting compilation of the recovery efforts in the overarching story. In total, the collection gathers seventy-five “found” fragments, seven of which are digital pieces contained in the accompanying CD-ROM. Gache makes her readers move from print codex to a computer playing the CD-ROM sequentially—following the linearity of the print pages—through the many episodic fragments. In this process, the collection of text emerges; moving from print to digital objects, the reader rehearses the recovery of text fragments. In addition to her use of print and digital media to advance her story, Gache also resorts to bibliographic marks to ensure the dynamics of her work take place. The print pages acting as placeholders for the digital fragments display a clipart icon of a hand holding a compact disk (Fig. 4). This digital “manicule” —or perhaps a more fitting name in this case would be “digit”— is a concise bibliographic mark through which Gache gives her readers directions and puts into practice the continuity between the two media of her work. In a case like LFM, a single literary text not only “lives” in two media, it is only actualized as the reader follows the narrative and bibliographic directions given.

Fig. 4. A placeholder page in LFM with a “digit” instructing the reader to go to one of the digital fragments in the CD-ROM

Borsuk, Durbin, and Hatcher’s ABRA, though it relies in essence on the same media (two codices in this case, and an iPad/iPhone app), functions quite differently. As a collection of poetry, ABRA operates through recycling, erasing, and combining words that mutate from one page to the next—the plasticity of the text is the central dynamic and poetics of the work. Across its three objects, ABRA presents the same collection; however, the identical texts appear to the reader and shift in the way that each medium best affords. The artists’ paperback functions as a flip book where text and image are animated with the rapid movement of the pages. The iOS app takes full advantage of the touch screen, prompting readers to manipulate and animate the poems’ words through a variety of prescripted “spells”. Finally, the artists’ book amplifies the motif of textual tactility in the work through the use of blind impressions, heat-sensitive disappearing ink, and laser-cut openings. The niche carved out on the back of the artists’ book is meant to encase an iPad running the ABRA iOS app. This encasement, like the digits in LFM, guides the readers’ handling of the work and materializes the continuity of textual media. In a work like ABRA, the plasticity of the text is made possible in each instance by the affordances of each object, and highlights how, through all its media, the readers’ hands make the text happen.

Many more examples of “binding media” exist, and I can only begin to outline their characteristics and significance to literary criticism and book studies here. However, as can be seen from these two examples, and as mentioned earlier, in print-digital works the literary text, indeed the meaning of the piece, is distributed across media. Further, the text is by no means just the words printed, etched, or scripted, but the objects themselves and the very handling of them. The readers’ handling is a form of interactivity alternative and complementary to that afforded by the digital applications and devices. The manual use of two objects is critical to produce literary, poetic, or narrative meaning. An immediate result of these dynamics is a marked self-reflexivity of the materiality of the texts, but also of the purposeful act of reading demanded by them. This process, in the same way as N. Katherine Hayles has argued about technotexts, “unite[s] literature as a verbal art to its material forms” (25). But these works go beyond that since, in addition to the material forms, the actions performed on them are part of the text. A clearly experimental practice, marginal even to electronic literature, print-digital literary works grant writers the possibility to create new meanings at the intersection of textual media; to renew extant questions about the linking mechanisms of hypertexts and the web at large; and to ask their readers to reconsider what the text is when they read and what they do with it.

Print-digital works have historically, and in some cases perhaps de facto, interrogated the anxiety associated with the relationship of print and digital media. The many recurring historical fears of media succession and a bookless future, most succinctly summarized by Leah Price in “Dead Again,” offer a useful backdrop to reflect upon the critical significance of works “binding media”. On the one hand, the way these works have been created underscores the specificity of media—how some texts and their intended effects best live in one or another medium. These texts put into practice media coexistence and continuity, rather than succession. On the other hand, the creative needs of authors writing these texts have put pressure on publishers to take on bold bibliographic projects. Importantly, although large publishers have launched titles (Iain Pears’ Arcadia, published by Knopf) or series (Penguin’s 2012 partnership with Zappar) bringing together print and digital media, this type of artifact has been dominated and best carried out by independent presses around the world.

Ultimately, print-digital literary works produce compelling temporal effects for our conception of literary text and technological development. Often argued to be the future of literature, the book, and the publishing industry, works “binding media” have never become that future at any point since they first started to be published. Moreover, works that would otherwise be current and contemporary have decayed along with the technologies that made them possible and exciting. For these works, the resilience of the print codex foregrounds the rapid obsolescence of the digital applications, the shifts in storage media, and their distribution. In that sense, print-digital works are radically grounded in our fast-paced technological contemporaneity. The mixed materiality of “binding media” embodies the historicity of textual media change and stands as the archaeological remains of the very specific media moment in which they were created and published—one that we can look back to in the future for answers about literary publishing in the late twentieth and early twenty-first centuries. They are simultaneously future looking and on their way to obsolescence.

Works Cited:

Borsuk, Amaranth, Kate Durbin, and Ian Hatcher. Abra. 1913 Press, 2015.

Gache, Belén. El Libro Del Fin Del Mundo. Fin del Mundo Ediciones, 2002.

Gitelman, Lisa. Always Already New: Media, History and the Data of Culture. MIT Press, 2006.

Hayles, N. Katherine. Writing Machines. MIT Press, 2002.

Pinsky, Robert. Mindwheel. Synapse & Brøderbund, 1984.

Price, Leah. “Dead Again.” The New York Times, 10 Aug. 2012.

Esther Dyson

A Few Words About Words

“Words are not in themselves carriers of meaning, but merely pointers to shared understandings.”

— David Waltz, Thinking Machines

“We’ll always have Paris.”

— Humphrey Bogart/Rick to Ingrid Bergman/Ilsa

Contrary to the “a picture is worth 1000 words” meme, the 14 plus 4 words above convey more meaning than thousands of pictures, precisely because they are symbols; they contain multitudes of examples.

David Waltz’s quote describes most of human language and understanding. The tiny word Paris might mean a distant love affair recalled in Casablanca (Casablanca, the movie)… or it might be a reminder of the fragility of religious monuments, an homage to Notre-Dame. Or it might recall the bricklayers of legend building a cathedral, a single image, who were asked what they were doing and gave answers at three levels of meaning…

The first: “I’m laying bricks, dude!”

The second: “I’m building a cathedral, brick by brick.”

And the third: “I am honoring God.”

Okay, enough examples!

I love words precisely because they are symbols; their meanings exist only in interpretation. Outside a [human?] mind, they are simply physical artifacts: sounds, shapes on a page, calligraphy, whatever. Words are both much richer than pictures or other representations – they can carry so much more meaning – and more efficient. Yet they are abstractions of what each person intends to send or receives; they expand in the mind.

But those meanings can be very broad and rich. Words can expand themselves as metaphors – transformations or projections of meaning. In essence, metaphors recognize common, innate properties across seemingly dissimilar, separate things; they classify things in a way that a programmer might call a mistake – for example, if an AI confused a data-storage/-manipulation service with a patch of humid air. In fact, creativity is a feature that operates like a bug.

Words have multiple levels of meaning: They can be specific or abstract; in addition to the shared understanding of reality – when it is shared! – they can also transmit the speaker’s/writer’s intent, or their (desired) social status, or all kinds of cultural references – inadvertent or otherwise. For example, there are usages that are meant to indicate breeding but do the opposite – take “between you and I” – until, hopefully, they become generally accepted even though they are technically “incorrect.”

Words also change meaning - and what they indicate about the user - over time.  Swear words have become commonplace, but there are also trigger words, such as “get over it” or “he’s so articulate” or “she’s cute, but really smart” or [your favorite clueless comment here].  

Indeed, words are symbols, whereas pictures (and videos) are examples. Where I, a white woman, might see a photo of another white woman as representing “mother,” another person – a black child, for example - might see it as the teacher who simply cannot remember how to pronounce her student’s name. The meaning (whatever it was) is easily mistaken.

Indeed, pictures and most online videos generally lack deep structure or logic. They are “just sayin’;” they are not explaining or asserting a point of view. A neural net that recognizes/classifies images ultimately is not as smart or flexible as an expert system that can understand logic or understand the goals, function or behaviour of the things it encounters.

Thus, our increasingly inarticulate though vivid new world of images and videos is troubling, as it caters more to emotion than to thought. We classify and react to things rather than understand them. You need words to argue productively and thoughtfully, but so much of current discourse is sound bites and images, without logic or narrative to make a point.

That’s why I love words, even though they too may be beyond the ken of a listener, let alone an AI. Just consider “I love you.”  It ranks up there with “Let’s have lunch” or “I’ll have it for you shortly!”

Esther Wojcicki


Text has been part of mankind’s history for as long as humans can remember. Can you imagine a world without the Old Testament or the New Testament or the Quran or the writings that are part of Hinduism, Buddhism or Sikhism? Text is key to these influential books.

According to the March 2007 edition of Time, the Bible “has done more to shape literature, history, entertainment, and culture than any book ever written. Its influence on world history is unparalleled, and shows no signs of abating.” With estimated total sales of over 5 billion copies, it is widely considered to be the most influential and best-selling book of all time. As of the 2000s, it sells approximately 100 million copies annually.

The origins of writing or text appear during the start of the pottery-phase of the Neolithic, when clay tokens were used to record specific amounts of livestock or commodities[1], according to Wikipedia.

Text is the communication tool of choice except for voice. Human beings are social animals; they need to communicate, and thus they need text in all mediums. As long as the future of human beings is secure, the Future Of Text is secure, because humans need to communicate. Some may say that the future of humans on earth is at risk, and it does seem more precarious today with the advent of climate change and COVID-19, but what is certain, looking at historical accounts, is that mankind has survived plagues, wars, and natural disasters in the past. Certainly it will survive again, and thanks to text it will be able to communicate, plan, and execute plans. Humans have survived, and thus so will text.

What has changed in the last 30 years is the medium of text: 500 years ago it was paper, and even 40 years ago it was paper. In fact, text will grow, not shrink. Today the most popular medium is electronic, via a mobile device, giving everyone the opportunity to communicate easily and even be a citizen journalist using sites such as Medium, Facebook, Twitter, Blogger, and more. Indeed, this new medium has allowed text and communication to grow beyond the wildest dreams of our ancestors, and even beyond the wildest dreams of people who lived just 50 years ago. Who would ever have expected that a man in Los Angeles could know within minutes what was happening in Rome? Or that family members on the other side of the planet could be in touch in seconds?

Can you imagine where you would be without texting today? Or where billions of people would be without the ability to send a message that you are not coming or that you will be late? Without being able to coordinate? Today trillions of messages are sent every hour. Texting fosters independence and creativity. Texting, or SMS (short message service), was first developed in Germany in 1984 by Friedhelm Hillebrand and Bernard Ghillebaert. The development was slow. In 1999, texts could finally be exchanged between different networks, increasing their usefulness. By 2000, the average number of text messages sent in the U.S. had increased to 35 a month per person. But 2007 marked the first year that Americans sent and received more text messages per month than phone calls. Social media sites like Twitter adopted the short character format, which helped the text message phenomenon — and thus the world learned to be more character-conscious and concise.

Text is the basis of all news sources and all communication involving news. Innovations in text transmission have dramatically increased the news around the world. Real-time newswires disseminate information almost instantaneously. News is the basis of all our decision making, all our hopes and fears. The transmission of information about the Coronavirus has thrown the world and billions of people into a panic never before seen in the history of man. Text is the basis of transmitting this information; it has been and will always be. The day after a news story, it is no longer news. It is already history.

Text is used in history books and in books of all genres and today these books can be accessed in digital format as well as hard copy thanks to Amazon and Google Books. Most libraries around the world offer digital textbooks. Text is used in all areas of business, all areas of medicine, all areas of academic pursuits, all areas of human interaction. Just think of any kind of human interaction, and you will see text playing a major role.

Text is key in education, and learning to read text is the main goal of our education system worldwide, enabling the transmission of learning from generation to generation. Literacy rates around the world are at the highest level they have ever been. The global literacy rate for all males is 90.0% and the rate for all females is 82.7%. The rate varies throughout the world, with developed nations having a rate of 99.2% (2013); Oceania, 71.3%; South and West Asia, 70.2% (2015); and sub-Saharan Africa, 64.0% (2015), according to Wikipedia.

Historically, the textbook has been the vehicle for transmitting information in education. John Issitt in the journal History of Education wrote “the negativity surrounding textbooks in terms of use and status as both literary objects and vehicles for pedagogy is profound.” Today even with electronic textbooks the sentiment remains but is not as strong. Textbooks, according to Issitt are “particularly hated by academics who feel they reflect no creative input,” but that was before the electronic textbook which is now widely accepted because it has the advantage of graphics, audio, desktop simulations and video. Also, today students can rent textbooks making them easier and more affordable.

In summation, text has played a major role in the lives of humans, and it will continue to do so at even increased levels thanks to the use of electronic devices.


1- "Beginning in the pottery-phase of the Neolithic, clay tokens are widely attested as a system of counting and identifying specific amounts of specified livestock or commodities. The tokens, enclosed in clay envelopes after being impressed on their rounded surface, were gradually replaced by impressions on flat or plano-convex tablets, and these in turn by more or less conventionalized pictures of the tokens incised on the clay with a reed stylus. That final step completed the transition to full writing, and with it the consequent ability to record contemporary events for posterity" W. Hallo; W. Simpson (1971). The Ancient Near East. New York: Harcourt, Brace, Jovanovich. p. 25.

Ewan Clayton

Five Thousand Years

This summer I acted as advisor to an exhibition about writing held at the British Library. Writing: Making Your Mark covered 5000 years of writing and encompassed over forty writing systems. We held the exhibition because it feels like we are at another important moment in the history of writing. Sixty years ago, when the computer began to be reconceived not only as a mathematical and statistical tool but also as a writing machine in succession to the quill pen, the printing press and the typewriter, we saw an explosion of new tools for handling text, both on the screen and in the scanners and printers that computers became linked to. But from the early days at Xerox PARC in the 1970s, the Alto networked desktop computer had already incorporated a wide vision for what a computer might do. ‘The major applications envisioned for the Alto were interactive text editing for document and program preparation, support for the program development process, experimenting with real-time animation and music generation, and operation of a number of experimental office information systems. The hardware design was strongly affected by this view of the applications. The design is biased toward interaction with the user, and away from significant numerical processing…’. Those visual and audio aspects of digital technology have now evolved to a point where it feels like we are in a second phase of how text and the computer may work together.

Some aspects of our digital communications are now being handled in non-textual ways. Using voice alone we can create text without a keyboard; we can issue verbal commands to digital technology; biometric data is replacing the signature; YouTube videos are supplanting instruction booklets and some elements of education. Podcasts and audio books are edging into the space where text was used as entertainment. Sometimes, as with an app like Instagram, we communicate by sending pictures only; on other apps we reach out to each other simply by sharing music. As a result of these and similar developments, some people have asked whether writing has a future. The teaching of joined-up writing has been dropped by some educational authorities (in Finland, for instance).

But having considered how we could represent 5000 years of writing in the British Library exhibition, one of the abiding impressions is how inventive human communities have been in the range of materials that they have used for communication and record, and also how genres representing specific needs have persisted across different technologies and tools. It is in tracking these needs and genres that we can find a connective story of how writing has developed and where it might be going. Crucial also to this task is the perception that writing could be thought of as an ecology of forms, technologies and communities. We use different aspects of the overall ecology to achieve different things. Knowledge and content (though I am suspicious of the disembodied framing that these words encourage) can take on multiple forms. It is not helpful to think only in terms of one technology (the digital) or one totemic object (the book); we need always to be aware of the totality of the fields through which we communicate and find meaning. Of course, today and in the future, that ‘we’ may also include digital or supplemental entities of various kinds.

Fiona Ross

Future Of Text

The diverse means of textual communication that are currently available and that digital technologies seem to promise for the future were arguably unforeseen by readers a mere half century ago. Over millennia, the visual communication of knowledge and literature that developed variously through epigraphic, chirographic, xylographic, typographic, or lithographic means augmented orality and, in many cases, displaced it. It is worth noting that technologies for the diffusion of texts – enabling the exchange of ideas and dissemination of information – often operated synchronously1 and continue to do so in the twenty-first century albeit in changing relationships. Furthermore, the materiality of texts that has formed an integral part of the generally linear reading process, even if the latter formed part of a performative act, is still valued and valorised in the digital age.

Yet the affordances that the intangible digital texts of this era and beyond provide to the reader have been cited as presaging the demise of the analogue book2 – just as e-papers have occasioned the decline of printed newspapers, at least in the Western world. It is plausibly argued that digital data can enhance comprehension for the reader beyond simulating the printed book. A particular contribution is the creation of hypertext links: ‘its webs of linked lexias, its networks of alternate routes (as opposed to print’s fixed unidirectional page-turning) hypertext presents a radically divergent technology, interactive and polyvocal, favoring a plurality of discourses over definitive utterance and freeing the reader from domination by the author’3. Notwithstanding the development of hypertext fiction, such as Izme Pass by Carolyn Guyer and Martha Petry as early as 1991, which provides a non-linear, non-sequential engagement with the text by the reader, who in effect becomes the co-author, most e-books, while offering choices of fonts and type-resizing, follow the construction of the standard codex form4.

Experimentation with new ways of visualising texts has a long, pre-digital history: non-linear means of reading can already be found in the sixteenth century5 and in even earlier as well as later forms of concrete poetry – from the works of the 2nd century BCE Greek poets to those of the 1950s’ Brazilian poets6 and onwards. Furthermore, dictionaries and directories, amongst other textual genres, have never been designed for linear, sequential, or immersive reading; in analogue form such texts have utilised typographic hierarchy to facilitate navigation and usability.

It is knowledge-based, i.e. non-fiction, electronic texts that perhaps derive the greatest benefit from being transposed to, or generated in, the digital environment; an example is afforded by the British Library’s online catalogue of digitised manuscripts. Here searches yield almost instant results and the entries are tagged with links that provide contextualising metadata. Rapid searchability of web contents and electronic documents is an expectation of today’s users; and the employment of OCR on print-bound texts has increased the corpus of electronic textual resources. The introduction of Web 2.0 has created additional beneficiaries: the users of social media, who can interact, collaborate and generate text-based (and video or audio) content in the form of information exchange, blogs, and websites, tagging their own key words and creating hashtags, etc. Furthermore, the development of variable font technology7 for web design has enabled what has been termed responsive typography to assist designers in creating readable content on differently sized devices.

There are known hindrances that readers encounter with digital texts: links may not work with every browser and can be of restricted functionality, or simply too distracting; annotation is poorly supported; and interactivity is limited. Some of these issues should be resolved for the future, and many are genre specific. With reference to newspapers, Paul Luna has commented: ‘In terms of multimodality, options for readers, and its ability to signal the importance of the topic and the newspaper’s editorial stance, the printed page is much richer’8.

Regrettably, for millions of readers the options for typographic composition, textual transmission and information exchange are meagre, as methods of visual communication have long privileged the Latin script. The use of images of print editions for e-papers is commonplace in many writing systems; and even print editions – which continue to flourish in South Asia – are compromised by the poor availability of high-quality fonts for accurate language representation. Moreover, scripts that rely on contextual forms and re-ordering of graphic elements for rendering syllables frequently become malformed by incompatible layout software. In 2019 vast numbers of social media users still rely on transcription into Latin script for communicating in their language due to the inadequacy of available fonts or the unavailability of a script on a device9. This situation, while gradually improving, urgently needs addressing to realise optimal textual communication in both analogue and digital media for the immediate and distant future. These are allied technologies, which inevitably borrow features from each other10: while digital technology facilitates global access, the analogue book is unparalleled in providing a rich haptic experience as expressed by a book reviewer:

Even before I dived into the long, 728-stanza poem, I spent a few luxurious moments running my hand over the volume’s smooth cerise cover with understated gold lettering, its spine edge with a patterned border; then a few more moments gazing upon the beauty of the font that traced the intricate curves of the Kannada script.11

Reading for joy as well as knowledge should be a rewarding experience for today’s and tomorrow’s generations12.


  1. See Sheldon Pollock, (ed), Literary Cultures in History: Reconstructions from South Asia, (Berkeley: University of California Press, 2003), pp. 21-22.
  2. Leah Price, ‘Dead Again’ August 10, 2012, Sunday Book Review Essay. A version of this article appears in print on August 12, 2012, on Page BR30 of the Sunday Book Review with the headline: Dead Again.
  3. Robert Coover, ‘The End of Books’, The New York Times On The Web, June 21, 1992, p. 1.
  4. With its evident disadvantages: rather than breaking free of the printed page, it displays single pages, disallowing the typographic two-page spread; see also Paul Luna, Typography: A Very Short Introduction, (Oxford University Press, 2018), p. 77.
  5. As mentioned by Piotr Rypson in his paper at the ‘Future Graphic Language’ symposium, 6 December 2019, showing text from De const van rhetoriken of Matthijs de Castelein, (Gent, 1555) in ‘chess-board’ form.
  6. See Noigandres 4, (São Paulo, 1958).
  7. OpenType Font Variations, see:
  8. Paul Luna, Typography p.76.
  9. See, Fiona Ross, ‘A pocketful of type: the impact of mobile phones on global type design’, Type, Fall 2019 (4). pp. 29-31.
  10. See Tree of Codes by Jonathan Safran Foer (London, 2010) & N. Katherine Hayles, Writing machines, (Cambridge, Mass. ; London : MIT Press, 2002), p. 33.

Fred Benenson & Tyler Shoemaker

A Font of Fonts

For years the Unicode Consortium advertised itself with a slogan: “When the world wants to talk, it speaks Unicode.” The initial pitch was radical but straightforward. Assign a numeric value, or “code point,” to every character in every writing system in the world, past or present. When a computer transmits text, it should use these values. But let just one standard manage them. Free yourself of this burden so you may read and write whatever you please.

This pitch worked. Since its release in 1991, Unicode has become the lingua franca of global computing. All new internet protocols are now required to use it, and it is commonplace among modern operating systems and computer languages.

But we cannot say whether it is really feasible to “speak” Unicode. When it comes to naming characters, the standard, ironically, is ambiguous at best. For example, the official Unicode name for 👋 is “WAVING HAND SIGN.” But 👋’s code point is “U+1F44B.” Which of these three names do we speak: 👋, WAVING HAND SIGN, or U+1F44B? And how do we speak them? The latter two are a mouthful and the first, 👋, may well be unpronounceable.
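The three-way naming can be inspected directly in Python, whose standard-library `unicodedata` module exposes the official Unicode character names; a minimal sketch (displaying the emoji itself still depends on your terminal's font):

```python
import unicodedata

ch = "\U0001F44B"                    # the character itself: 👋
official = unicodedata.name(ch)      # the official Unicode character name
code_point = f"U+{ord(ch):04X}"      # the standard U+ hexadecimal notation

print(official)      # WAVING HAND SIGN
print(code_point)    # U+1F44B
```

The same three names attach to every assigned code point, from ASCII letters to the newest emoji; which one we "speak" remains the standard's open question.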

This ambiguity is symptomatic of a key distinction in Unicode. For the standard, any one code point represents only the idea of a character, not the graphic form of a character, its “glyph.” In an attempt to remain neutral amid the politics of script design, Unicode refuses to specify what glyphs should look like. It speaks in the abstract with characters it never writes, leaving the work of displaying all 143,859 of them (as of March 2020) to your computer.

You are thus responsible for correctly speaking 👋. The same goes for any other letter, logogram, or symbol, alphabet or abjad. But should you wish to write across scripts in the spirit of Unicode’s original vision, you will encounter a problem: no one font package is large enough to contain the requisite address locations for all the characters that comprise Unicode. While most of the 1,114,112 code points it offers still sit empty, like vacant hotel rooms, even the 143,859 points Unicode has thus far assigned exceed what font packages can store. It is now too large to render into text.
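The mismatch is a matter of simple arithmetic: the OpenType and TrueType font formats identify glyphs with a 16-bit index, capping any single font at 65,535 glyphs. A back-of-the-envelope sketch, using the figures above (as of Unicode 13.0, March 2020):

```python
TOTAL_CODE_POINTS = 0x110000   # U+0000..U+10FFFF: 1,114,112 possible values
ASSIGNED = 143_859             # characters assigned as of Unicode 13.0
GLYPH_CAP = 0xFFFF             # 65,535: OpenType's 16-bit glyph-ID ceiling

unassigned = TOTAL_CODE_POINTS - ASSIGNED
fonts_needed = -(-ASSIGNED // GLYPH_CAP)   # ceiling division

print(unassigned)      # 970253
print(fonts_needed)    # 3
```

Even ignoring the extra glyphs real fonts carry for ligatures and contextual forms, covering every assigned character would take at least three maximally full font files; in practice, operating systems stitch together many more.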

Put another way, the standard has evolved into a paradox: while it mirrors all we write, we cannot write all of it. As if in tacit recognition of this, today the original Unicode slogan seems to have vanished. It no longer appears on Unicode’s website and surfaces only among the archived flotsam of mid-90s web ephemera. Regardless of whether or not the world can speak Unicode, from the vantage of our current text technologies the standard’s universal writing is effectively unwritable.

We think this paradox is both a major obstacle and an opportunity. There are 970,253 unassigned code points remaining in Unicode—what happens when it assigns them? If the standard’s coverage of modern and historic scripts already makes it unwritable, what about new scripts? There is room for them, but will we be able to write them? How will text processing have to change to render all of Unicode now and to render future expansions to the standard?

For that matter, what will these new Unicode characters be?

More emoji is an easy answer—but unimaginative. Today, the Consortium is swamped with proposals for dozens of new emoji a year, and these proposals put the Future Of Text on the line: members’ votes ratify new lexicons. The sheer number of these proposals, however, and the debates they provoke regarding whether emoji constitute a language of their own, tend to divert attention away from the problem at the center of Unicode’s expansion into future script systems: what counts as an encodable character, and how do glyphs express, or even determine, this?

While emoji serve as an effective reminder that the standard’s growth is far from over, they currently seem to be the only avenue for exploring how the standard will grow. But we do not think they should be our sole path toward new forms of semantic expression. Further, we are skeptical about the promise of any such explorations when, at present, they fall primarily into the hands of the Consortium’s voting members, Google, Facebook, and Apple among them—especially given how fervently these companies capitalize on language use.

Instead, we invite new glyphs that would transform the remaining code space of Unicode into a language lab, a place for experimenting with the Future Of Text. Start out by drawing your own glyphs. Make a font. While you do, consider which of the standard’s characters would serve you best. Are they monoscriptural? Only alphabetic? Perhaps you should branch out beyond your native scriptworld. Maybe you need your own language. If so, make one. Remap your keyboard accordingly. Or perhaps you find the keyboard paradigm limiting. What then? What comes after “typing” text? What follows font packages? How universal can text processing be? Might Unicode include whale songs and bowerbird nests—as well as weaving patterns, rope knots, and dance notation? How about Unicode for alien alphabets?

At present, we primarily understand Unicode to be a method of ensuring computers accurately transmit our messages. But it would be wrong to see the standard as merely that. Unicode is also an open semantic project, and in the future, the code points we will have assigned will tell readers how we once chose to speak. In this sense, the standard is an archive, a permanent ledger, a corpus representing the lexigraphic bones of our communications.

When these bones become fossils, what do we want Unicode to have said about us?

This very essay bears out some of the difficulties of rendering Unicode into a readable form. The Unicode Consortium updated the standard between our drafts. Further updates are sure to come. The brief timeline below is a rough attempt at writing with these changes in mind:

Total Unicode characters:

December 2019 137,929

March 2020 143,859

[Your date here]


Remaining unassigned code points:

December 2019 976,183

March 2020 970,253

[Your date here]


Frode Alexander Hegland

Augmented Reader(ship) & Author(ship)

I believe passionately in the potential of text, which is why I gathered the brilliant minds you will find in the pages of this book. I particularly believe in the potential of powerfully augmented text, and I believe that–in addition to dialogue–it is crucial to build and experience systems to further learn what we can make, very much in effect bookstrapping. As a result I have built the text augmentation tool ‘Liquid’, the word processor ‘Author’ (whose name is a reference and a way of paying respect to Doug Engelbart’s ‘Augment’) and the PDF viewer ‘Reader’.

I try to augment our interaction with text in documents, as opposed to live-shared or social-media text, because I believe that documents are worth investing in. Writing to publish–to ‘make public’–a document slows down the dialogue and gives the participants tangible artefacts to refer to later, rather than writing into the ether of social media and ever-editable manuscripts. I hope to furnish authors and readers with tools that enable a deeper literacy, much like visual designers have through powerful tools like Adobe Photoshop and InDesign, as well as Apple’s Final Cut Pro X.

Please note, the Visual-Meta system, as presented at the end of this book, has been implemented in Author & Reader to augment citation connections and to provide the glue between rich authorship and rich readership, using the current de-facto interchange format, PDF, without attempting to create a new document format.


Liquid is a utility I developed quite a while ago. It allows you to select any text–not only in Author or Reader, but in (pretty much) any application–and perform a keyboard shortcut which copies the text into the Liquid interface/bar.

Here there are options including Search, References, Convert, Translate and so on. You can choose a command through mousing or use visible keyboard shortcuts, such as ‘r,w’ to look up the text in ‘References/Wikipedia’ or ’s,b’ to ‘Search/Google Books’.

This allows you to perform commands within a second, freeing your curiosity to follow hunches and easily build better understanding from multiple sources.

Please try it. It's hard to explain in writing why it's powerful (oh, the irony!); it is much easier to understand when you try it, which is of course the reason I feel so strongly that we need to develop augmented text tools and see how they actually perform.


Reader is a minimalist PDF viewer designed to complement Author. A brief User Guide has been included at the start of this book. To get to the User Guide you can press cmd-(minus) on your keyboard to fold the document.

AUGMENTED Authorship

Author is a word processor focused on providing a visually minimalist workspace with powerful interactions for producing thoughtful documents. It provides useful views of the information (and ways to change quickly and easily between them) and the means to quickly create citations, automatically appending a References section on Export (as well as posting directly to WordPress), reducing the need for clerical work.

Using different views, and letting you efficiently toggle between them, is something I like to think of as using your occipital lobe to augment your pre-frontal cortex (similar to a computer using the GPU to augment the CPU).

Making the best use of modern high-resolution screens starts with making them free from anything which is not informationally useful to the user. Therefore most controls in Author are available by hand, including keyboard shortcuts and track-pad gestures, but are presented in the ctrl-click menu with the keyboard shortcut displayed for easy learning. In addition to the Fold, Find and Glossary support shared with Reader, Author also gives you:

My introduction to this book and the appendix on Visual-Meta lay out much of the way I see this future of text. These two pages are what I am doing about it. All the software is for macOS and is available from which also hosts demo videos. I would be grateful for any feedback on the software and this book:



Frode, I’m typing this, poking at tiny iPhone boxes, getting autoincotrr autoincotrrcted autoincorrected akk all the way and thinking, something is broken here - why do I have to type? How can text be at once a free-flowing conduit and a frustrating hindrance? (Don’t tell me about Siri, she needs a hearing aid with my accent). You asked me to write 500 words. Instead, I propose a “metalude” - a one-word park bench to locate somewhere in the middle of your printed tome, where readers can pause, shake a stone from their shoe, hunt for a bathroom, feel their fingertips, be in the unautoincorrected moment - hang the future...


A big, blue, furry T you Touch

A big, green, rubbery E you press to make a sound for your Ears

A big, red, sandpapery X you scratch and sniff for an olfactory Xperience

A big, gold foil-wrapped T you peel back to reveal a shiny brown T that you lick to Taste chocolate

These letters were fabricated and hand-applied to the first 25 editions; the remaining editions are printed with this instructional text only.

In the digital version, short animations replace these interactions. Note: the only one you can execute satisfactorily is E - such are the limitations of current digital technology.

I now return your readers to their regularly scheduled Future Of Texting.

Garrett Stewart


Text is always imagining its own supersession. In Orwell’s 1984, the “speakwrite” of voice dictation, rather than material inscription, is on a dystopian continuum with the “telescreen” monitors—in contrast to the creamy virgin pages of an outlawed blank notebook fetishized by the hero. Over half a century later, human language is to be retired altogether, after cryogenesis, by nanotech neurology in Don DeLillo’s Zero K, with the “the rivers” of Hemingway’s sentences streamed into the cortex by electronic wavelets of direct nonlinguistic pulsation.

Lately, the book is even more imperiled than language. Conceptual artist Fiona Banner mocks the dematerialization of the codex under e-marketing protocols with a bound version bearing as title No Image Available (2012). As early as 1997, John Roach anticipates by a decade the e-reader in his installation Pageturner, where electric fans propel the pages of a text so fast that, in transfer to a video monitor, they offer only an illegible blur. Two decades later, in 2017 at New York’s Center for Book Arts, Roach curates the “Internal Machine” exhibit of conceptual bookwork, recalling I. A. Richards’s remark about books being “machines to think with.”

But in a later show at the Whitney called “Programmable,” Mika Tajima projects text into a nonlinguistic future with a bibliotech displacement beyond that of codex mechanics. Although in considering the destiny of anything, the rear-view mirror of etymology might seem an unlikely resource, not so in Tajima’s case. The origin of text in texture is inescapable in the shuttled fabric of her wall weaving once the nature of its input is disclosed. The lexical backstory enhances the technological one. Behind the intermediate Latin, textus for “style” or “texture” in writing, lies its original sense in the verb “to weave,” texere. (Contrast here the contemporary sense of the verb to text in its remove from complex manual fabrication to the evolutionary precondition of opposable thumbs.)

In Tajima’s tandem installation, under the title Negative Entropy, her abstract weaving is based on the original punch cards of the Jacquard loom, renowned forerunner of computer technology—coded here to the audial wavelength of industrial sounds (rather than transmitted electronic text) at a mainframe computer hub. Opposite this wallwork, under a typical artist’s book vitrine, is a spiral binder displaying such actual punch cards—having been used in this case to record the noise, not optical patterns, of contemporary power looms—interleaved with pages of strictly visual design. These latter—“translated” from this second industrial site, textile factory rather than data center—are audial traces that have passed through a voice-recognition software to generate their further spectrographic “read out” as braided chromatic strands. Weaving confronts codex, tapestry refigures text, binary impress on cardboard faces off the oblique results of digitized speech recognition—all in ways so reciprocally estranging as to suggest a dialectical resolution at some abstract vanishing point. In any such standoff of mediations, at least the trope of the text, if not always its inherited linguistic texture, seems here to stay.

Günter Khyo

Tabula Rasa

Few people have the gift to look far into the future and to anticipate the challenges and opportunities that lie ahead of us. I owe a lot to great thinkers and visionaries like Vannevar Bush, Doug Engelbart, Ted Nelson and Alan Kay who allowed me to see the world with different eyes and whose writings and life-works keep inspiring my daily life. It takes great courage and dedication to pursue a vision of a better future, for the majority will not see its immediate value and stand against it:

“The reformer has enemies in all those who profit by the old order, and only lukewarm defenders in all those who would profit by the new order, this lukewarmness arising partly from fear of their adversaries, who have the laws in their favour, and partly from the incredulity of mankind, who do not truly believe in anything new until they have had actual experience of it.” (Machiavelli, The Prince)

Doug Engelbart devoted his entire life to the betterment of mankind. Despite suffering countless setbacks and disappointments, he stayed true to his vision of augmenting human intellect to address the world’s most urgent problems. I can think of no nobler quest. And yet, during my computer science studies, I can remember only one history lecture in which his name was mentioned briefly; his contributions reduced to a footnote in the turbulent and forgetful history of computing. Fortunately, in every new generation, there are people like Frode who care deeply and keep the flame.

As for the future of text, I find myself walking in circles, pondering over my cluttered notes, my thoughts restlessly jumping from topic to topic. In a sense, everything about the future of text has already been written. Many of the exciting promises of (hyper)text and computing are still waiting to be rediscovered and fulfilled: knowledge workers blazing their trails through the masses of records, making connections between the seemingly unrelated, poets weaving stories in Xanaspace with its magnificently beaming links and researchers drawing interactive maps in networked information spaces. Within the blink of an eye, information about any topic can be consolidated from all imaginable sources, viewed and re-arranged in any way the heart desires, enriched with interactive computer models that are constructed on the fly.

As for the present, I think we are nowhere near that kind of interactivity. Suppose you buy a brand new computer with a pre-installed operating system – without a web browser and without access to an app-store. What can you do with it? How does the computer help you in organizing your notes? Can you attach pictures to your notes, draw doodles on them? Does your computer come with a word processor? If so, can you change the way your word processor works? Suppose a friendly neighbor lends you his copy of “ComputerLib/Dream Machines”. Can you transform your computer into the literary machine of your dreams? Can you at least improve the clipboard such that it holds more than one item and make its contents visible? Can you show me what has been on your desktop one month or even one day ago? And have you ever wondered why your desktop cannot stretch beyond your screen? Your brand new computer system can process billions of records per second, render near photo-realistic interactive real-time graphics, but it is not yours to command – unless you are an expert.

Businesses seek to fill the void by offering applications and services that compensate for the lack of a better systems organization. Especially since the advent of the modern web, there is certainly no shortage of instantly available software, ranging from simple time trackers, citation managers, and search engines to collaborative office suites. As corporations gain footholds in our information spaces, more and more people are worrying about issues of privacy and ownership. But I argue that it is not the big corporations that are the issue. It is the notion of software building that needs to be questioned. Instead of asking what the user needs, and coming up with a solution to a generalized problem, we should ask ourselves how we can enable users to build their own tools and models. We have to realize that we cannot fully anticipate what a user might need: she might be a brilliant writer who imagines ingenious new ways to interact with text. Or she might want to make adjustments as seemingly trivial as re-arranging or recoloring controls – and who are we to decide where the controls should be or how they should be accessed?

The future of text cannot be and should not be entrapped into the narrow confines of applications. The real world is not a messy patch-work of applications. I do not have to install an application to read a paper or draw a sketch on it. Our minds are not databases that store and retrieve facts; information is not a commodity, it is in the eye of the beholder, where it can take arbitrary shapes, can be interacted with and seen from infinitely many perspectives. If the computer and text systems of the future are to serve as an extension of the human mind, then we have to take a step back and realize that we are still at the very beginning of computing. Instead of bogging down future generations of computer users and scientists with endless minutiae of technical procedures and terminology – all arbitrary and changing every year – we should give them the time to exercise their imagination, to think about world building, to think about what they need. We have to provide a stable, coherent environment that can be personalized and adapted whenever the need arises. An environment that can be fully explored without artificial barriers, and in which thoughts can be exchanged without intermediaries. Text is a wonderful medium for this kind of exploration and thankfully, it does not need a computer.

Gyuri Lajos

The Future Of Text Is In Our Past

As Wendy Hall remarked at the 2017 Future Of Text Symposium, Ted Nelson was right: we need two-way links, transclusion, and micropayments. In his vision “Everything is deeply intertwingled”, or as Charles Eames put it “eventually everything connects” and “the quality of the connections is the key to quality, per se”. Two-way links connect people into gossip networks such as Scuttlebutt, and enable us to follow incoming as well as outgoing trails of information. Via peer to peer conversations, decentralized interest based social networks can be formed. Two-way linked hypertext consists of identifiable “nodes” (we call them dots) forming complex networks of interconnected elements. Ted envisaged networks of nodes (graphs) being transcluded from one context into another, transformed to fit new contexts. The description of two-way links in meta level graphs captures the intent behind the creation of connections between things; types and roles are thereby made “cognitive”. The two-way linked nodes span a “Knowledge Graph” created primarily for humans as “Semantically Linked Texts”, we call MindGraphs. They capture semantics much like Linked Data does for machines.

Ted envisioned a system where all contents belong to their authors, and authors can be rewarded by those who link to or transclude their content through micro-payments. As Brewster Kahle puts it, this requires us to bake “freedom of expression, privacy and universal access to all knowledge, as core values ... into our code”, i.e. we need to re-decentralize the Web by developing new protocols and tools using cryptography for Web 3.0 and make micropayments part and parcel of knowledge exchange. This confirms that Ted Nelson was right to claim at the 1997 WWW conference “Your future is my past”.

Web annotations were built into the first browser for the Web, but they were switched off. In the past five years, Hypothesis has pioneered the re-making of web annotations. Links from MindGraph to Hypothesis allow annotators to own their annotations while gaining access to the conversations facilitated by Hypothesis. Links to MindGraph from annotations let other annotators become aware of relevant content created in MindGraph as HyperMapped contexts, allowing comprehension at a glance.

Using its built-in WikiData Explorer, MindGraph lets users anchor their knowledge in entities drawn from WikiData. Having a bridge to Linked Data facilitates “Weaving a Decentralized Semantic Web of (Personal) Knowledge”. MindGraph relies on OrbitDB’s peer-to-peer database technology. OrbitDB is built on top of IPFS (the InterPlanetary File System). IPFS is a new peer-to-peer hypermedia protocol powering the decentralized web. Our technology relies on open source decentralized protocols and capabilities which enable us to make sense of the web as an extension of our minds. We provide the means to organize all annotated knowledge into an emergent decentralized (inter)personal knowledge graph.

The next challenge is how can we make the knowledge that our tools help to create and organize “universally accessible and useful”. The Web is built with common standards. This is its greatest value. But as an application platform it is riddled with accidental complications.

The browser should have been built “more like an OS kernel than an app” [Alan Kay: “Normal Considered Harmful”]. By a Kernel we mean a self-contained bundle of minimal viable capabilities which can be extended and rebuilt as technology changes or new capabilities are needed. A Kernel must provide a stable platform for applications built over it. The kernel that supported the “software internet” of interoperable applications at Xerox PARC has been ported to run in today’s browsers, even though the applications were coded 40 years ago! Engelbart and his team built NLS with a small kernel. When they got a new machine they only had to change the kernel for everything they developed on top of it to work. Kernel-based development affords permanence to capabilities. This is key to building a Permanent Web.

Holochain, InfoCentral, Haja Networks’ Ambient Protocol, all implicitly explore the design space for building a software kernel for the Web. Tim Berners-Lee’s SoLiD (Socially Linked Data) proposes RDF as a universal semantic data format. It promotes data ownership by decoupling storage from applications, so users have the freedom to choose where their content resides. MindGraph is a meta-circular universal extensible graph format which goes “Beyond Ontologies”: instead of a single version of truth, it captures situated, personal, unreliable, and changing information. All applications built with MindGraph are born interoperable, and the information and capabilities needed to manage it are user extensible, meta-designable, and co-evolvable with a bootstrappable kernel we call TrailHub. We are building TrailMarks, our first minimal viable Knowledge Augmentation Engine, using the TrailHub kernel. We use TrailMarks for collaborative web research, documentation, design, project planning, and tracking. It runs in the browser, or as a self-owned server. It already delivers 80% of what we need.

In our “Back to the Future” project we follow in the footsteps of those pioneers who predicted the future by inventing it. We facilitate decentralized meta-level conversations about the means by which knowledge is created and organized. Using TrailHub we can form decentralized “improvement communities” that co-evolve the tools which fulfill Engelbart’s vision of augmenting “the intellectual effectiveness of the individual”. We provide “Augmented Authoring” capabilities that transcend the limitations of imitating paper. TrailMarks re-conceptualizes collective intelligence as decentralized emergent conversations. Everything in MindGraph is anchored in (inter) personal federated HyperKnowledge Graphs and Linked Open Data. We aim to combine them to form a global, emergent, self-organizing decentralized “Conceptipedia”. Like Ted and Doug we support “deep rearrangability” of content by “keeping links outside the file”. Following Alan Kay TrailHub is built as a software kernel for the Web. Like the pioneers we care about structure, collaboration, openness, tinkerability, and user autonomy first; not closed ecosystems which emphasize ease of use over empowerment. TrailMarks started out as an antidote to information overload. Now, bootstrapped with TrailHub, it facilitates conversations leading to emergent collective intelligence, allowing us to co-evolve the tinkerable software needed to manage it.

Harold Thimbleby

Intentionally Parallel Text

Ever since it was invented thousands of years ago, text has been WYSIWYHG [1]; it served to get ideas out of our heads into the physical world, where the ideas stay “written in stone,” passively unchanging. Writing created rigid scriptures that could not be easily challenged. Computers disrupt this conventional view: now text can be automatically and easily interpreted and reinterpreted. Software itself is an obvious example of the new potential for active text: computer programs are texts not just for humans to read, but are intended to make computers do things, from animating embedded graphics to configuring smart contracts. Software itself combines different sorts of texts: program code, comments, and explicit metadata written in the text — like copyright, version numbers, and authors.

In programming languages, notably LISP, the program itself can change the role of text, using constructs such as quote, eval and macros. Some languages, such as TeX, are designed for creating text and en route can do arbitrary computation. Ideas such as page numbers, tables of contents, and cross-references are the results of computations applied in the text itself — they are aids to the reader, historically done manually, but now by computer. For example, in TeX, we can write “\the\year” and the reader of the text will either see that or 2019 (that being the year we typeset this text). TeX thus allows a single text to have multiple views. Another example: in Microsoft Word I can write banana, and in my view that is bananna (i.e., underlined) but in your view when you read it (right now) the word is just bananna (i.e., incorrectly spelt). In fact, to be clear, my view has four banannas, all underlined, but your view has four banannas only one of which is underlined. Word adds views like outline, form, galley, track changes, and split views: these are different views of the same text, each generated to make certain human applications easier. Hypertext transforms our relationship with text [2], and it does this primarily through active links in the text. Wikis, HTML with embedded CSS and JavaScript, multi-author web editors with multiple views, have brought such powerful concepts, even if not recognized as such, into the mainstream.

This new flexibility of text has crept up on us. We often think computers just give us more flexibility or higher quality results or cheaper processes, rather than seeing how underlying assumptions are changing. We occasionally notice how old ideas, such as copyright and signatures, become transformed into more sophisticated computational ideas, like digital rights management (DRM), but the basic goal of text-as-such remains unchanged.

Computers, however, permit far more. It is worth inventing a new phrase so we can have a discussion about the disruptive concept. “Parallel text” is suggested here; it is provocative yet sufficiently suggestive.

A parallel text is designed to be rendered, edited and computed on in multiple ways for multiple purposes. Critical to being a parallel text is that there is a single conceptual object (which may be composed of sub-objects, such as a sequence of chapters, a set of modules, or texts within a hypertext) and a set of relations that project the object to separate, essentially parallel, texts. In many cases, the parallel texts can be composed and recombined. Editing or updating a parallel text can (but need not) update the source of the parallel text in the original text — this is a property called equal opportunity [3], since the original text and the parallel text have equal opportunity in any editorial process.
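
To make the definition concrete, here is a minimal Python sketch of a parallel text: one conceptual object, two projections, and an “equal opportunity” update that flows back from a projection to the source. All names here are illustrative inventions, not an API from the cited references:

```python
# One conceptual object; the functions below project it into parallel texts.
source = {"title": "Press On", "year": 2007}

def as_citation(doc):
    # Projection 1: a human-readable citation.
    return f"{doc['title']} ({doc['year']})"

def as_metadata(doc):
    # Projection 2: machine-readable metadata.
    return f"title={doc['title']!r}, year={doc['year']}"

def update_year_from_citation(doc, citation):
    # "Equal opportunity": editing the citation view updates the source,
    # so neither representation is privileged over the other.
    doc["year"] = int(citation.rstrip(")").rsplit("(", 1)[1])

update_year_from_citation(source, "Press On (2020)")
```

After the update, both projections render the new year, because they are views of the same object rather than independent copies.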

Many applications of parallel text are immediately obvious, feel natural, and are often already well-known under other terms. For example, presentation programs provide parallel views (PowerPoint provides 10 that I counted) of the same text. But this is not to say parallel text should be dismissed as just a new name for a routine concept; parallel text as an intentional device opens up many new possibilities. In particular, when it is done intentionally, it can reduce error and increase accessibility.

Here are a few pointers to new ideas: reproducible peer reviewed papers [4]; regulation of medical devices [5]; and reliable user manuals [6]. Parallel text is a new focus to help transform blogs, podcasts, social media, word processors, and much more — but that is a bigger story for the Future Of Text.


  1. H. Thimbleby, “What you see is what you have got,” First German ACM Conference on Software Ergonomics, Proceedings ACM German Chapter, 14:70–84, 1983.
  2. T. Nelson, Computer Lib/Dream Machines, 1974.
  3. C. Runciman & H. Thimbleby, “Equal Opportunity Interactive Systems,” International Journal of Man-Machine Studies, 25(4):439–451, 1986. DOI 10.1016/S0020-7373(86)80070-0
  4. H. Thimbleby & D. Williams, “A tool for publishing reproducible algorithms & A reproducible, elegant algorithm for sequential experiments,’’ Science of Computer Programming, 156:45–67, 2018. DOI 10.1016/j.scico.2017.12.010
  5. H. Thimbleby, Achieving the Digital Promise, Oxford University Press, 2020.
  6. H. Thimbleby, Press On, MIT Press, 2007.

Howard Oakley

Richer Text

More than a generation of development of text composition, browsing, and study on computers has brought disappointingly little advance from traditional black ink printed onto sheets of paper. This is most probably because profits to be made from text are limited: over the same period, photography has switched from still images on film to high definition movies, in order to sell highly profitable consumer products like iPhones.

There remain many under-explored ways to make computer text richer, by making better use of computer and device displays. A simple example is the provision of multiple views of the same document; split windows are available in better applications, but hardly any let you view different sections of the same document in different formats. Viewing them in different formats makes it easier to refer to endnotes, an appendix, and a full spread of the body of a document all at the same time. That is physically impossible with a printed book, and easily implemented in a PDF reader, yet so seldom offered.

More complex are ways to supplement single text views, through accessory windows as in Liquid | Author, or fully interleaved text. Even when you are not fluent in another language, ready access to the original version when reading a translation can be invaluable, and is of cultural importance too. Although it is easy to adapt texts made freely available by Project Gutenberg to parallel display in columns or pages, true interlinear text is only available in a few specialist applications. This can also allow the reader to make interlinear annotations, for example.

Previous implementations of interlinear text on computers have relied on extensive markup or support for sophisticated page layout. For users to assemble freely available plain text files into personal interlinear texts requires applications that can parse normal punctuation, such as the conventions that each line of poetry is terminated by a line ending, and that stanzas are separated by a single blank line. This isn’t difficult to incorporate into a Rich Text editor, for instance, and transforms it into a platform for connecting free texts into interlinear format. Readers can then study, for example, a non-English original together with literal and more literary translations into English, a facility very seldom available in printed versions of the text.
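
A parser following exactly those punctuation conventions fits in a few lines. The Python sketch below is hypothetical, not drawn from any particular application; it interleaves two plain-text versions of a poem, indenting the translation beneath each original line:

```python
def interlinear(original: str, translation: str) -> str:
    """Merge two plain-text poems line by line.

    Conventions, as described above: one verse line per text line,
    and stanzas separated by a single blank line.
    """
    out = []
    for stanza_a, stanza_b in zip(original.strip().split("\n\n"),
                                  translation.strip().split("\n\n")):
        for line_a, line_b in zip(stanza_a.splitlines(), stanza_b.splitlines()):
            out.append(line_a)
            out.append("    " + line_b)  # translation indented beneath
        out.append("")                   # blank line between stanzas
    return "\n".join(out).rstrip()

print(interlinear("Dans le vieux parc\nsolitaire et glacé",
                  "In the old park,\nlonely and frozen"))
```

The same function could take a third text, interleaving a literal and a more literary translation beneath each original line.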

Another worthwhile aim for improved text platforms is giving the reader greater control over their reading environment. This can be achieved with custom appearances such as Dark Mode, but two leading formats for reading text, HTML and PDF, still lack support for such appearances because they were standardised so long ago. More flexible formats such as Rich Text (RTF) remain poorly supported in this respect.

The goal for modern text applications should be to build on the strengths of printed text without being bound by its physical limitations, so as to transform computer text as radically as has happened in photography.

Howard Rheingold

Language As Trance

I would say that words, from the spoken to the written, change thought, by reducing a far more complex universe to that which can be funnelled into language. As the first verse of the Tao te Ching has it: “The Tao that can be put into words is not the true Tao.” This, to me, was the fundamental lesson of psychedelics:

Language is a marvelous tool for understanding and manipulating the universe by throwing a kind of grid or map over a highly complex cosmos. That grid or map can be used to navigate, understand, and manipulate. But language entrances us into believing and acting as if the universe is as simple as language can convey. Psychedelic experience brings one into direct contact with the complexities that can’t be shoehorned into words.

Ian Cooke

Future Of Text In Libraries: A National Library Perspective

The British Library’s collections span the world, encompassing a multitude of languages, subjects and formats of publication. Text appears as print on paper; manuscript and calligraphy on parchment, vellum, leaves and bone; or embodied in spoken words recorded across a range of media. More recently, the Library has begun collecting text in digital formats, from PDF files, through EPUB, to the rich variety of expression on the web.

Collecting of born digital text is largely within the context of Legal Deposit, which provides both a mandate and a legal framework to enable the creation of a digital collection. This also creates a territorial focus for collecting activity, aligning with the Library’s special role, as a National Library, in reflecting the history of published communication in the UK.

Legal deposit implies comprehensiveness, and we have always tried to collect publications from outside mainstream production and distribution methods, as well as showing the development of new types of publishing and circulation. For digital text, our efforts began almost unconsciously through the deposit of magazines with tapes and disks attached. Today, we are actively exploring the ways in which digital production encourages new forms of creativity, and how we can respond as a Library – with our objectives for long-term preservation, documentation and access.

In our exploration of ‘emerging formats’, we have chosen to focus on interactive texts and text produced for use on mobile devices. These categories blend into each other, and we can see common practices and challenges for collection management.

Interactive texts provide a narrative that is influenced by the way in which a reader engages with it. The reader may be an objective observer, making choices that are appropriate to their experience and needs. Or, the reader may choose to adopt a character within the narrative, becoming part of a story. Sometimes, one may become the other, and the reader and narrative may be influenced by others participating at the same time.

Interaction need not be through conscious choices made by the reader. The narrative may be influenced by data gathered by the device that the reader is using. Kate Pullinger’s Breathe responds to location and weather data gathered by the reader’s device. The children’s story apps Jack and the Beanstalk and Goldilocks and Little Bear make use of camera and gyroscope data to create a physically, as well as visually, stimulating experience – calling to mind the physical playfulness of printed children’s books.

This playfulness, combined with the digital environment within which the text is experienced, reminds us of games. Sometimes, a text is mistaken for a game, or is unsure whether it is a game or a book, or we mistake a game for a book. This blurring of boundaries is a feature of the evolving use of text, but does create challenges for a Library which is perceived as “about books not games”.

Further blurring of boundaries exists when considering territoriality. Legal deposit enables the Library to collect UK publications. However, text on the web can be co-created by authors working in many countries, and hosted on platforms that are presented as trans-national.

All of which suggests collaboration as a way of managing complexity. Collaboration is not new, but Libraries, Galleries, Museums and Archives realise that they have shared challenges across their collections, and there are opportunities to work together to build the new capabilities and skills that are needed to manage these new collections.

Another important way to resolve some of these complexities is to consider the objectives and needs of those using the collections. At the British Library, we have been talking to creators, researchers and curators to understand better what might be required of libraries in managing and providing access to these emerging texts. The “Library of the (near)Future” would need to provide or enable meaningful use of texts, through ensuring authenticity, confidence and preservation of context as well as the works themselves.

Creators want reassurance that their work will be preserved as close as possible to their original intent. Readers want to know that the text that they see is the same as the text as originally experienced. Where there is difference or loss, this needs to be documented and understood.

Authenticity and confidence should be supported by attention to the context in which a work was produced and made available. This includes a technological context, covering aspects such as dependencies on software, firmware and hardware. Context might also include how the environment within which a work was made influenced its construction and creative expression.

This also suggests that the library of the future will be supported by an ecology of rights that allows for access to the systems and software, or emulators based on them, that enable authentic and context-specific use of texts. Open approaches are vital, but this will also require understanding and engagement more widely within industries.

The knowledge and skills possessed by the people working in libraries will evolve too, to reflect both the technical challenges associated with collecting and managing complex digital text, and also the new ways in which readers will want to access and analyse those texts.

Finally, libraries will continue to provide places for discovery and interaction with these new and evolving texts. This may require opportunities to talk to subject specialists, access to specific hardware or technology, and also new types of spaces that support and encourage interaction with works that create noise or require movement. Libraries will continue to provide spaces for inspiration, creativity, collaboration, wonder and joy in the texts of the future.

Iian Neill

Codex Futura

Relations are not edges in a graph database. Or foreign keys in a SQL database. Or references between elements in an XML document.

This is a degradation of their rightful place in human thought.

A relation is, according to the artist and philosopher Vernon Blake (1875-1930), the result of the coordination of two or more elements in such a way that it can be considered a new entity.

A relation can be physical, like the entity of water, which is the result of a coordination (not the sum) of hydrogen and oxygen.

Or a relation can be perceptual, such as the colour-idea of a Monet painting which forms in the mind of the viewer when they see, understand, and remember the relations established between its parts, its brush marks and touches of coloured paint.

For Blake, an artist is a being capable of both “perceiving with more or less accuracy the nature of the interrelations of the universe” as well as constructing out of the materials of their art “a series of relations of analogous quality”.

This idea of relation as a fundamental quality of perception, thought, and expression, is the organising principle behind Codex.

As a technical product, Codex is a text annotation environment which uses standoff property annotation to generate entities in a graph meta-model in the Neo4j database. Because standoff properties are stored externally to the text stream, they can freely intersect and overlap, whether it is a single user layering annotations over time, a team contributing to a shared model, or NLP and computational linguistics services generating layers of analysis. Because standoff relies on fixed character positions, it is typically reserved for use on immutable texts which have been proof-read to a high standard. However, Codex employs a custom text editor which permits changes to the text stream in real time in a WYSIWYG interface, making standoff annotation a practical option for mutable texts.
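The core of the standoff idea, and the offset bookkeeping a mutable-text editor must perform, can be suggested in a few lines. This is an illustrative sketch, not Codex’s actual implementation; the names and the example sentence are invented:

```python
# Minimal sketch of standoff annotation: annotations live outside the
# text stream as character ranges, so they may freely overlap. An
# editor for mutable text must shift offsets when the text changes.

from dataclasses import dataclass

@dataclass
class Annotation:
    start: int   # inclusive character offset
    end: int     # exclusive character offset
    label: str

def insert_text(text, annotations, pos, fragment):
    """Insert `fragment` at `pos`, shifting annotation offsets to match."""
    new_text = text[:pos] + fragment + text[pos:]
    delta = len(fragment)
    for a in annotations:
        if a.start >= pos:
            a.start += delta
        if a.end > pos:
            a.end += delta
    return new_text

text = "Michelangelo wrote to Buonarroto."
anns = [
    Annotation(0, 12, "person"),    # "Michelangelo"
    Annotation(22, 32, "person"),   # "Buonarroto"
    Annotation(0, 33, "sentence"),  # overlaps both, which markup forbids
]
text = insert_text(text, anns, 13, "often ")
print(text[anns[1].start:anns[1].end])   # still "Buonarroto"
```

Note that the “sentence” range overlaps both “person” ranges, something impossible with strictly nested inline markup; that freedom is the chief attraction of the standoff approach.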

Annotations in Codex aren’t stored as markup in an XML file: each standoff property is converted to a node in Neo4j, and linked to both its text of origin and to any entities in the system it may reference. (Style and layout are also treated as annotation in this way.) Any named thing in a text may be considered an “entity” in the Codex parlance, whether a person, place, group, phenomenon, concept, and so on. Entities are defined by “statements” made about them by witnesses in the text. These statements resemble RDF triples in that they represent a context between one or more entities and a “concept”. In Neo4j this statement takes the form of a hyper-node structure. Each entity relates to the core concept with preposition-like roles, like “at”, “with”, “under”, “to”, as well as predicate relations like “subject” and “object”. Each statement can be traced back to a witness (the entity the statement is attributed to) and the region of text from which it was derived (precise to the character). This simple structure can express events, relationships (in the regular use of the term), and ontological assertions.
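The shape of such a statement can be suggested with a plain data structure. Everything here (the field names, the example entities, the offsets) is a hypothetical illustration of the idea described above, not Codex’s actual schema:

```python
# Hypothetical sketch of a Codex-style "statement": a hyper-node that
# relates entities to a core concept via preposition-like and
# predicate-like roles, traced back to a witness and to a
# character-precise region of the source text. All values invented.

statement = {
    "concept": "travels",
    "roles": {
        "subject": "Michelangelo",   # predicate-like role
        "to":      "Rome",           # preposition-like roles
        "with":    "Urbino",
    },
    "witness": "Michelangelo",       # entity the statement is attributed to
    "source":  {"start": 104, "end": 151},  # offsets into the text of origin
}

def mentions(stmt, entity):
    """True if the statement involves the given entity in any role."""
    return entity in stmt["roles"].values() or entity == stmt["witness"]

print(mentions(statement, "Rome"))   # True
```

In the real system each of these values would be a node in the graph, so queries like `mentions` become graph traversals rather than dictionary lookups, and can run across an entire corpus at once.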

Codex thus allows the user to annotate text with freedom, to collaborate with others on modelling documents, and to make use of Neo4j to identify connections between entities across hundreds of texts. When a syntactical layer is generated for a text, it is trivial to query the database to find overlaps between syntax and semantics, not just inside one document but across all the documents in a text corpus. The graph can also be leveraged to discover textual connections between entities. To give an example, in the Michelangelo Letters corpus in Codex, one can find any sentence in any text which mentions Michelangelo along with anyone he is related to, such as his brothers, his patrons, his friends, etc.; and in each sentence returned the matching textual mentions of these entities are colour-coded for easy reference.

This integration of text, standoff annotation, and graph and other features is not an end in itself, however. The goal of Codex is to attempt to provide a “speculative instrument” in the sense suggested by literary theorist I. A. Richards (1893-1979): a tool for thinking; a series of lenses to examine the relationship between words and ideas; and an apparatus for both modelling and controlling ambiguity.

The dream (which may never be achieved) is that Codex can be used not only to model implicit and explicit relations in a text, and across corpuses, and domains composed of many corpuses, but to give the user a different kind of instrument to express the relations which compose their own thoughts and feelings.

Related material

Jack Park

On Well-Connected Human Thoughts

When the British actor Robert Morley concluded a British Airways television commercial with the now famous line Come home America. All is forgiven, he was engaging an American’s vast collection of stories we had encountered: stories which began with us tossing a hissy fit over taxation without representation and ended with the Revolutionary War, when we stopped being colonies and started an experiment in Democracy. Storytelling is what humans do, starting with an unknown collection of vocal utterances and, more than likely, kinesthetic motions and gestures, later to be recorded in the cave drawings found in various places around the world. Today, text in every language records our thoughts, the stories we tell around campfires and in classrooms; those, coupled with art in the form of paintings, photographs, dance, and films, are our primary storytelling modalities.

Text, when you think about it, is a relatively recent technology invented by humans. Text coupled with drawings (later, images) allowed humans to advance well beyond the vocalizations they must have used when they first discovered, for instance, that tossing a spear ahead of a running animal was a more accurate way to hunt. Today, that’s a few words and a simple diagram. But human life today is marked by vastly more complex issues than those experienced by spear-chuckers; we now must record the many ways we detect cancers, predict future events, and even compose epic sonnets around people, places, and events.

Through my lens, text and storytelling are about learning and problem solving. My views, through that lens, exist in the context of these opens: open science, open source, and open access. In that context, text, and the stories recorded in text, serve the joint purposes of organizing human thought, federating thoughts around the subjects in play, and discovery (known as literature-based discovery) of the many thoughts which are not yet, but should be, connected.

From that, the Future Of Text, as I see it, is one which promotes well-connected human thoughts, along with the heritage of those thoughts.

Jakob Voß

Document Models

Since its migration to the realm of data, text is ultimately expressed, stored, and processed as a sequence of bit values. For historical reasons, bits are grouped in bytes of eight. For some years, and in some regions, these bytes corresponded to the basic elements of text, but those days of typewriter mimicry are gone. So where’s the text in our ubiquitous streams of zeroes and ones?

The current answer, at least on the lowest level, is surely based on Unicode: this international standard defines a set of characters from most writing systems known today, and it defines how to encode them as data. Despite its omnipresence, a definition of digital text via Unicode has drawbacks: first, it limits the set of textual characters to those defined in the Unicode standard. Neither Klingon nor the scribbled subtitle of Art Spiegelman’s autobiography Breakdowns counts as text in terms of Unicode. Second, the specification can only express sequences of characters, when text is much more than this. So what is text when expressed in data?
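The gap between characters and bytes is easy to demonstrate. A short Python example (any language with Unicode strings would serve as well):

```python
# One sequence of characters, several possible byte sequences: the
# mapping from text to bits depends entirely on the chosen encoding,
# and bytes no longer correspond one-to-one to characters.

s = "Tao 道"

print(len(s))                      # 5 characters (Unicode code points)
print(len(s.encode("utf-8")))      # 7 bytes: "道" takes three bytes in UTF-8
print(len(s.encode("utf-16-be")))  # 10 bytes: two per code point here
print(hex(ord("道")))              # 0x9053, the character's code point
```

The same five characters occupy seven or ten bytes depending on the encoding, which is exactly why “a sequence of bytes” is not yet a definition of text.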

Text consists of elements and properties such as characters, words, markup, segmentation, and layout. Opinions differ about which parts count as relevant content, but at least these core elements must be defined unambiguously. The methods for expressing these elements in data are called data formats or document formats. Each document format comes with an implicit or explicit document model that maps textual properties to elements of digital code. Each model comes with a simplified, opinionated view of textual reality. This limitation allows us to store text with limited sets of data elements, but no model and no format can express all kinds of texts.

As Unicode shows for character data, it is nevertheless worth looking for universal document models — one should only be aware that document formats will never be complete, as people continuously come up with new methods to articulate meaning in text.

James Baker

Mass Observing Fears For The Future Of Text

Since the early 1990s the anthropological organisation The Mass Observation Project has periodically asked its ‘Observers’ - a cohort of hundreds of volunteer writers - about the impact of information technology on how they write, read, and communicate. In 2004, Mass Observation Observer W1813 wrote:

> I confess that sometimes I resort to using the computer using cut and paste techniques to write several letters at once.

Confessions shouldn’t be taken lightly. They evoke guilt, remorse, disquiet. The question is how did a person get to this point? To the anxiety - nagging if not existential - that their writing was made impersonal by computing?

The mid-1990s saw a profound change in how people in the Global North interacted with computing. In the UK, where the Mass Observers lived, all manner of activities were consolidated around Windows-like computing interfaces, blurring the boundaries between work and home, labour and leisure. And so when in this period the Mass Observers talked about writing, they talked about change. In particular, they worried that they had lost something. And whilst individuals were rarely able to fully articulate what that loss was, taken as a cohort patterns of worry emerge: about losing agency over the production of writing, of their writing losing personality, of being unable to adequately preserve the texts they created and received. But fear isn’t the whole story. Because the Mass Observers tell us that before the fear came the dreams of the born digital text, visions of slicker, more efficient, and more precise textual production. Mass Observers tell us that computers made them better writers, and as a cohort - for all their fears - they embraced this new technological mediation: from the mid-1990s onwards there was a gradual decline in the number of Observers who wrote to Mass Observation using a pen or a typewriter, and a concomitant growth in those who produced their observations using a word processor.

These phenomena are historically specific. They are grounded in an infrastructure of textual production which is lost: clacky keyboards, sticky mice, flickering monitors. When we open these historical texts on a modern computer, they display an uncanny similarity to our own texts. Performed by modern software, the historical text appears timeless, unbounded. Perhaps the uncanny is the root of the anxieties Mass Observers experienced: that producing a word processed text felt like a portal to nowhere, that shorn of the architecture of print, not only was the author deader than ever, but time was dead as well.

Reading the anxieties of the Mass Observers provides a blueprint for putting time back into the born digital text. By choosing to open an old text using old software and hardware, we can start to replicate the temporal feel we get from the architecture of print. And by analysing auto-save fragments and undeleted sectors we can begin to reconstruct the creation of a text, we can use digital forensics to find and understand those guilty cuts and pastes. The problem is that neither of these approaches are adequate dreams for the Future Of Text, for they lumber text with the architecture of textual production, denying it the lightness it deserves. Perhaps instead, as we look to the Future Of Text, we should simply keep in mind - as the Mass Observers tell us - that humans crave context. In turn, a dream for the Future Of Text might then do well to root itself in - without lumbering itself with - that historically specific desire.

*My reflections on this subject are indebted to David Geiringer, Thorsten Ries, and Rebecca Wright for collaboration, conversation, and inspiration.*

James Blustein

Text As Process

I come to the question of the Future Of Text as a hypertext researcher with a particular interest in human factors.  I take my inspiration from Doug Engelbart who, along with his many ground-breaking achievements, espoused a vision of augmented human intelligence.  Although my research includes augmented reality (AR) and virtual reality (VR), I am considering text as written words.  My research focus has been mainly on discursive text in contrast to poetry and other literary forms.  The texts that this essay is concerned with have words which are connected statements representing ideas, unlike word clouds for example.

Aware that this volume will be preserved in a static format I have striven not to employ many dynamic textual features (such as hyperlinks and stretchtext), although I do use html’s summary/detail disclosure element as an example but only once.


Text is important — it has a function and a value that are part of the ways that ideas are developed and transmitted.  In the present, technology supports search, breadcrumb trails for navigation, visual representations of structure etc.  I imagine that the Future Of Text is text that is presented by computational machinery.  I do not suggest that the computing machinery will be recognizable to us, nor will it be under glass, as Nelson has described uneditable text such as much of the WWW.  A Future Of Text augmented by computational machinery means that the possibilities that are attendant in text will be manifest and magnified by the application of additional technology.  In my optimistic view of the future, one of the benefits of computational machinery is to make possible and manifest the connexions between and amongst ideas; to formalize and connect them; to map new pathways; to inject oneself as an author; and to circulate those ideas and connexions, because with text available through a distributed open network (e.g. the www) every person has the (theoretical) potential to be an author, a reader and a publisher.  Although all of this has been possible since written text, the widespread adoption of computing machinery (particularly networks) expands the scholarly potential.

My response to the question of what is the Future Of Text has been influenced by many sources.  At this draft stage I do not intend to list most of them.  I am keenly aware of being influenced by Markson, in particular the plot-less novel Vanishing Point (2004), and many colleagues in the communities of hypertext authors and researchers.


I am particularly interested in text as palimpsest. As Blustein et al. (2011) wrote,

Notions of permanence attached to the written word are thought of as fetish; palimpsests (literally the residuum of erased text on parchment, metaphorically textual edits thought of as obscured in a final draft) are now marked by digital traces and tags.  Accordingly, the ways that readers can mark their unique engagement and strategies … are changing.

I consider the Future Of Text in two frames: as a reader (or person who experiences) text, and as an author of text.  We must recognize that these categories are fluid and often overlap.  In the simple case, overlap will occur as readers alter original texts (by annotation, understood broadly) and thus become authors.  More deeply, techniques and tools with traditional and designed uses are often subverted or appropriated.  In print, Nabokov’s (1962) Pale Fire is a familiar example of a text that appears to be of only one type but is actually of another.  Larsen used the path name feature of the Storyspace hypertext publishing software as a space for poetry in Samplers: Nine Vicious Little Hypertexts (2001).  The path name feature was originally intended for documentation and reader guidance; in Larsen’s text, each path name entry is a line in a poem that the reader can see when they look at the list.

I am consciously not addressing the rôle of publisher in part because, here too, the question of territory is complex.

Spatial hypertext

Although I currently favour spatial hypertext for activities that are done individually, e.g. writing, brainstorming, and information triage, my focus in this Chapter is on more conventional forms of text.

As an author/creator I imagine that in the future authors will use a framework to describe how their text will be presented.  That framework would contain not only the words of the text but also suggestions for the layout (in space and time) of the text.  Texts could be presented in forms resembling op art for example.  All of the techniques available to readers (see below) could, of course, be incorporated into an author’s writing.

As a reader/consumer

I would like to see a future in which readers can freely annotate texts to make the texts their own without concern that the original text or their changes would ever be lost.  For text not to be lost it must be recognised when it is found.

The 7 Issues

Of Halasz’s (1988) 7 Issues for hypertext, the most pertinent for this Chapter’s vision are: tailorability and extension, versioning, transclusion, and collaboration support.  Obviously, versioning is particularly important when texts can be changed.  The activities of updating and extending texts encompass readers adding their own notes and hyperlinks (internally- and externally-pointing) to enhance texts, and authors correcting or expanding their texts.  Transclusion is important in two ways: to support stretchtext (described below) and as the best way to support users creating their own documents by combining parts of others (akin to Victorian commonplace books).


Blustein & Noor (2004) discuss glossaries as both ancillary materials to authors’ texts and as stand-alone records of readers’ notes.  Those authors classify glossaries along four dimensions, two of which are relevant to the Future Of Text: flexibility of location — in a single document or potentially available in many documents; and user — for use only by one reader or to be shared by multiple people, even if only one of them can alter the content.  That vision of glossaries will be stronger with transclusion to support the integration of snippets (i.e. segments of text that are copied from other texts, not to be confused with lexia, which are units of reading established by authors).

Such glossaries are works composed by readers using, and augmenting, text written by others.  My vision of the Future Of Text firmly includes such hypertextual works.

Links and advanced organizers

A problem with most of today’s (2019’s) HTML-based online texts is that they necessarily use www browsers’ default link-following behaviour, which can be attention ripping: when links are followed, the entire visual context of the head (outgoing part) of the link is removed and replaced.  This behaviour makes it difficult for readers as it reduces the coherence of what Kintsch and van Dijk call the surface (or most basic level) of the text.  Ted Nelson has suggested using document browsers that can display a universe of documents at once by allowing users to zoom in and out and pan to see the text and overlap documents in myriad ways.  Concerns about overwhelming readers’ attentions in such interfaces make me seek better solutions.  Stretchtext (introduced by Nelson (1990); demonstrated by Fagerjord (2005) inter alia) and other types of fluid links (as invented by Zellweger et al., 1998) are ways I imagine these problems will be alleviated in the future.

Stretchfilm (Fagerjord 2005) is surface text that expands to include more surface text, similar to how today’s (2019’s) summary/detail element operates in HTML: the summary is always shown and the detail is shown only when activated, e.g. by being clicked like a link.  Zellweger et al. (1998) demonstrated multiple types of fluid text, but their intention was the same with them all: to provide the reader with an advance organizer* carrying information about what will be found at the destination of the link.  All of the fluid links in the 1998 and 1999 articles act as ways to inform the reader of what is at the destination of the link (or in some cases to provide dynamic content in place of a traversal link).  One class of fluid link acts like stretchfilm; another uses the margin of the display to show the additional information.

*I am indebted to Ruud van Meer for introducing me to the concept of advance organizer.

Adaptable text presentation

Some studies have indicated that readers’ success with hypertext is related to the constellation of psychological measures known as spatial ability.  Allen (1998) and Juvina (2006) have written extensively about this relationship and its implications.  Using spatial ability as an instance, I suggest that future text will have myriad forms of presentation that will be either automatically generated or under the control of the reader.  I imagine that the presentation will be personalized so that readers who want or require certain presentation styles, e.g. a presentation most suited to their spatial ability, personality, disability, or level of fluency, can be accommodated automatically by the technology with which they receive the text (today that would likely be an e-reader device, Web browser software, or a bound paper book or magazine).


Gobel & Bechhofer (2007) identified Wikipedia as a part of the www in which all of Halasz’s (1988) 7 Issues had been successfully addressed.  Wikipedia (n.d.) is an interesting example of a place where the rôles of reader, author and publisher blend, and where ideas are connected.  At once Wikipedia is about spreading knowledge and an example of social computing (according to Schuler’s (1994) definition).  Yet the volunteers (and small number of paid staff) at Wikipedia do not use the platform for its potential to map ideas or to create new knowledge as a community working together.

Why are we not yet in the future described above?  We need a cri de coeur.  Primarily because of a lack of coherent vision by those with resources to bring together the many disparate projects that have striven to make that future (Bouvin, 2019).  For the technology to be harnessed to do what I described earlier, viz., to make possible and manifest the connexions between and amongst ideas, to formalize and connect them, to map new pathways, to inject oneself as an author, and to circulate them, the value must be perceived and, regrettably, that perception rarely comes without the prospect of financial gain.  Better types of text cannot weaken understanding or degrade scholarship; they will most likely lead to greater opportunities for all: writers, readers and publishers.  As the size of the knowledge network increases, its value (profitability) will increase many times over according to Metcalfe’s Law (Gilder, 1993), Beckstrom’s Law and Reed’s Law (Hogg, 2013).  Scholars should strive to convince large publishers, governments and others with substantial resources to support a vision of a Future Of Text that will augment human potential and match the aspirations of Engelbart.


James O’Sullivan

Text Is Dead, Long Live Text!

We live in the age of screens. The age of data. The age of machines. The cultural conversation is dominated by the digitally multimodal, platforms where imagery, sound and interaction frame the act of storytelling. But text remains foundational to culture, to society and the self. Text remains a major part of how we construct, tell and digest stories, how we share and consume information, how we communicate, how we flirt, argue and confess. It is often through text—however intentionally or unintentionally—that we form and perform ourselves.

The prevailing discourse would have us believe that text is dying. And in many ways, we are, as a collective, beginning to reject text. How someone writes, and what it is that they have to say, is a window to the soul, and yet our society is one in which people choose partners based on the swipe of an image, watch short, polished videos critiquing content rather than engaging with the content themselves, and favour trite emojis over fuller, more wilful expression. It would seem that we are no longer concerned with the soul. There is often great substance in that which is visual, but there can be vapidity to how it, like anything, is rendered: attention is one of the most lucrative types of cultural capital, but nobody likes to pay attention. The issue is depth. That text offers endless scope to challenge or reveal, that it might be aesthetic for no purpose other than the sake of grace, is not enough to overcome the hyper-attention which has permeated the general populace. Text must be confined to 280 characters. Text must exist in the folksonomic servitude of some filtered image. Text, well-formed and privileged as it once was, just will not gain and hold enough attention these days.

But text does live on. The swiping of images is only a precursor to that first communicative exchange, and there, one cannot hide behind visual appearance. There are many occasions which demand that one writes. The capacity to write competently demonstrates a capacity to read properly, and an ability to read is perhaps the most important skill which anyone might acquire. Reading is not about the extraction of essential information from correspondence; that is communicating, and while logistically vital, communication has far less to offer the self than reading. To be deprived of reading is to be deprived of all that lies beneath the facile surface of text as blocks. Those who can read can see information for what it is, they can see what words pretend to say, sunken ideas that were never intended to be exposed. And this condition of knowing has always been present, such that we can say that text is unchanged. Text remains what text has always been; what has changed is what we do with text.

What has changed are those waveforms through which text is now likely to pass. The potential in words is in their arrangement, how they are brought together to form signage systems. Part of the act of arrangement is the waveform selection, the choosing of those apparatuses through which reception will be facilitated. Words on the page can act in certain ways while words on the screen might act in others, and there are different kinds of pages and different types of screens. But no matter how an arrangement is presented, no matter the waveform selected, text is text. When the first words were committed to paper, nobody envisioned the emergence of interactive fiction or generative writing, nobody would have predicted the communities of practice and aesthetic movements that have emerged around the great many forms of digital fiction and electronic literature. Text has persisted throughout much cultural fermentation, and whatever waveforms have existed, do exist, and are yet to exist, we can be almost certain that text will continue as long as humanity.

We will do different things with it. We will do old things with it. We will do everything and nothing with it, but it will be there. The future of digital text is quite simply its past. Text is dead, long live text!

Jane Yellowlees Douglas

Digital Text ≠ Skim Reading

Reading, today, is either thriving or an endangered species. But whose definition is being used? In 2017, a National Endowment for the Arts (NEA) survey found that 52.7% of Americans read books or literature not required for work or school. Further, 23.3% of adults reported reading books on an electronic device (NEA, 2018). Given the ubiquity of smartphones and tablets, the definition of reading itself may be undergoing a potentially calamitous sea change. Is it morphing away from an act that demands singular focus and deep engagement with the words on the page? Is it becoming a process where superficiality reigns, because the “reader” is so hemmed in by multi-tasking, and by the very sped-up nature of digital media, irrespective of setting, context, task, device, or genre?

A 2012 McKinsey study found that the average employee now spends an estimated 30% of each working day reading and responding to email. Under these circumstances, reading behaviour can resemble the skimming strategies researchers identified in eye-tracking studies of readers browsing web content. However, skim reading is not new. Before the widespread availability of email and digital texts, readers skimmed even standardized reading tests when the text would be available as they answered questions on it (Farr, Pritchard, & Smitten, 1990). What we do not know is whether skim reading becomes a default mode for reading all digital texts, regardless of context or genre.

Currently, suppositions about the increasing scarcity of deep reading have two significant flaws. First, this view, most prominently represented by Maryanne Wolf (2007, 2018, 2019), argues that neuroplasticity has biased our brains—now stimulated by smartphones, video games, and a barrage of digital stimuli—toward skim reading, the antithesis of the deep reading she sees as necessary to both learning and true engagement with texts. However, the studies Wolf cites to support this view rely on readers using PDF versions of print books or situations where the rewards for close reading over skimming were virtually non-existent. Second, this view also privileges a kind of attentional focus that has been historically rare and limited to well-educated and socio-economically privileged readers. In contrast, literary scholars including Nicholas Dames (2007, 2009, 2010), Franco Moretti (2010), and Jan Fergus (2006) have identified a process they have labelled “distracted” reading. This mode of reading followed an increase in the consumption of printed material, enabled by the rise of low-cost subscription libraries on both sides of the Atlantic. This mode of reading also parallels the shift away from perception as a concentrated act of attention, most memorably identified in Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction” (1935). Ultimately, the way we mix different registers of attentional focus during reading—scanning, skimming, browsing, deep, and close reading—may simply be an extension of reading practices adapted to the availability of reading material, now perhaps framed more aggressively by workplace tasks and classrooms where student learning is assessed increasingly by standardized testing.

One thing is certain: our current handling of digital texts in e-readers and online content alike is flawed, particularly in the design of markers for where readers are in a text. We use article, book, and chapter lengths for two valuable purposes. First, we use them to inform our comprehension: a development that happens at the end of an article or book is more conclusive than one that unfolds in the middle. Second, we also use them to allocate attentional resources. We read some articles with the intensive focus identified with slow reading. Reading has four identified speeds or rates: study, fiction, skimming and scanning. The slowest of these rates, the study rate, enables readers to supply the correct adjectives, adverbs, nouns, and verbs in a Cloze test, completed after reading when readers have no access to the text itself. Typically, both study and fiction rates tax attentional resources. If we’re particularly invested in the topic, we may interrupt other tasks later to return to reading that demands a study or fiction rate, rather than resorting to skimming or scanning rates in reading the material.

Current indicators of progress through a text are minimal. Progress bars for online articles appear only when we scroll and disappear when we delve into reading. A few exceptions, including Quanta Magazine and Politico, feature red or orange progress bars across the top of the screen, although Quanta’s progress bar frequently disappears in longer articles. However, for e-books, we have only poor indicators: the percentage of content read, or the number of minutes left in a chapter at our average reading speed. Since total content for most non-fiction includes extensive notes, bibliographies, and indices, these guides are misleading, making us misallocate attentional resources. Moreover, even seeing a digital count of the number of pages remaining or a percentage remaining is a poor substitute for eyeballing the dwindling number of pages left in a book or article.

Ultimately, what researchers like Wolf and others might be uncovering are shifts in our reading behaviour based on our still-primitive interfaces for reading digital text. For something so ubiquitous, our e-texts today are still Model T approximations of the book—and in dire need of some good UI design.

Jay David Bolter

Literature And Books In A Digital World

The Future Of Text is not the same thing as the future of the book. The future of symbolic communication through text is secure. Our media culture did take a “visual turn” in the final decades of the twentieth century, and that trend continues. Digital technologies have made it easy for hundreds of millions of users to make photographs and video recordings, and they often prefer to communicate in images in addition to or rather than words. By one estimate over one billion users have shared over 40 billion photos and images on Instagram alone. Nevertheless, there remain many forms of communication and representation that will always depend on words and symbols. Our scientific, technical, commercial, and bureaucratic worlds require text in order to function. For that matter, social media apps such as Facebook, Reddit, Twitter, WhatsApp, WeChat, and Weibo all attest to the continued popularity of textual communication, if only in the form of aphoristic tweets.

In today’s media culture, however, there is no longer universal agreement on the role of the book and other printed matter, or of the kinds of writing that have flourished in this material form. Of all the constituencies affected by the diminished status of the printed book, the one that feels itself most threatened is the literary community—I mean writers of fiction and general non-fiction as well as humanists in the academy, who study literature and the other arts and report their research in articles and books. Although almost all these writers are now published in ebooks or online journals as well as in print, they continue to regard print as the canonical form. They still write prose to be read from beginning to end, page by page, chapter by chapter, and the digital versions of their work are more or less faithful copies of their printed counterparts. Although this seems obvious, this community could have reacted otherwise and embraced digital forms of writing. Writers and critics were offered the opportunity in the 1990s to reimagine literature beyond the paradigm of the static printed page—in other words, hypertext, hypermedia, or interactive text. The offer was rejected. The hypertext fiction of that decade and more recent versions of e-literature have been largely ignored. Heir to the modernist century, even today’s literary community is only slowly accepting our plenitude of textuality, in which popular texts are often organized as networks of hyperlinked elements, the printed encyclopedia has been replaced by Wikipedia, and the library itself is becoming a network of websites and databases. Authors of “serious” fiction and nonfiction may be right to reject this change: it may be impossible to tell the same kinds of narratives or make complex arguments in these new forms. But the result is that the novel and the literary and academic essay are becoming increasingly isolated from the practice of the rest of our media culture.

The printed book seems unlikely to disappear anytime soon, but it threatens to become an esoteric media form.

Jeremy Helm

True Hypertext & A Mechanism For Wisdom In The Age Of Machine Learning: Solving Humanity’s Original & Persisting ‘Values Alignment Problem’ By Inverting The Filter Bubble

What are we to make of an apparently suicidal - or at least blindly reckless - human species?

I argue what’s missing is clarity about what communication is in relation to our shared humanity - in short, what’s missing is each other. Using this common sense truth of who human beings can be for each other in communication, I advance a proposal for what digital networking ought to be for communication.

For the last thirty years we’ve invested in one paradigm of digital designs that’s inherited a preexisting cultural error. Our numbed condition will persist while we avoid confronting humanity’s systemic conflict. Too easily satisfied, our attention passes over the short circuiting in this putative global digital brain. Consider that every other issue is actually a symptom of this oversight.

If civilization was a video game, to have any chance to play the game through, this is the puzzle we have to solve to pass our current historical level.

By the time you finish reading this you will consider yourself necessary to this global digital reboot, not out of duty but simply because it’s more fun.


We suffer under the illusion enemies are among us, when in fact conflict exists only at the level of our strategies - not at the level of our shared humanity. Without making this distinction our perception short circuits and we hang on to our strategy as if it’s the need, and relate to projections of each other as enemy[1] - which of course provokes further the history of humanity’s self-attack.

Listening for our shared humanity, we discover an underlying ‘why’, which can show up as needs, values, commitments, etc - all of which are access to renewing our relationship. In fact, creating new strategies is the easiest thing - the work is needed in getting to the requisite quality of connection.

Everyone has this strong view of what’s possible in communication - or at least has the capacity to recover it. Speaking and listening from an inquiry into “what are they/what am I feeling/needing?” we nurture this capacity in each other - that experience of life where we can trust each other and the process.

The Internet

We’ve gotten a sense of how quality of attention can open up new outcomes. Now imagine that the design of digital networking is like the ‘fine tuning’ of the universe: different design, different possibilities for life. The Internet is our largest medium for communication on the planet, so it’s essential that it is wired to bring out the best in us. The inspiration I’m working with here is the original vision for Hypertext[2] - Ted Nelson’s Project Xanadu, which has been called “First thought, best thought.”[3]

The World Wide Web, in use since the early ’90s, wasn’t structured for people to see their value in the systems of the booming economy the internet would become. However, first on the scene in the 1960s, Ted had an uncluttered view, and so conceived of digital publishing as a whole. “Our huge collective task in finding the best future for digital networking will probably turn out to be like finding our way back to approximately where Ted was at the start.”[4] says Jaron Lanier, author of Who Owns the Future?, a book-length treatment of Ted’s ideas[5] with an economic focus on the design sense of Ted’s copyright system “for frictionless, non-negotiated quotation at any time and in any amount.” Xanalogical Structure[6] is what it takes to track these linked connections in real time. Transcopyright, the literary, legal and business arrangement[7], brings about “a balance of rights and responsibility while at the same time reducing friction. That’s a rare, magical combination.”[8]

With this protocol as law of the network, your participation would automatically create for you the property of your own data - it’s automatically part of the functioning of online citizenship. Like water to the fish, the online world is our earthly communication space, and we are all first class citizens. I join Lanier in the call for rebooting our information economy for human ‘data dignity’, but I see more in the subtlety of Xanadu still.

Civilization’s next milestone

Your online identity, your data body, is currently spread across multiple commercial operations. Lacking the critical dimensionality of hypertext, the World Wide Web’s .com, .etc, Domain Name System and numerous App environments are all silos, incapable of holding space for a truly connected world. The nature of hypertext is like hyperspace, not some past based ‘desktop metaphor’ for exclusive concentrations of computers which hold us in some data peasantry. New metaphors are needed to put attention on this possibility. Something like an ecosystem of ecosystems[9]. My possibility is humanity listening for the completion of historical conflict, which is included within a larger cultural realization I characterize as best practices want to be practiced[10].

For the flourishing of life on earth, the ‘minimal viable product’ we need is communication[11]. As citizens in a social conversation, our aim is participation in and the design of a maximally inclusive convergent process where the conversation that gets displayed first online is the speaking that emerges from a listening of the greatest number of participants[12]. Success in this forum will become the de facto vetting for leadership, with politics and economics reconfigured within communication.

Hypertext frees up conversations to exist in new relationships to each other. Instead of ‘where?’ (like ‘this URL’) we can imagine a map of ‘what (is this content about)?’, and within that topic, conversations will vary by how inclusive and relevant they are. The basic equation for any particular piece of content goes ‘quantity of unique negative feedback, divided by the number of its overall views’. This is the mechanism of a listening/synthesis, ‘circuit’[13] within Xanalogical structure, responsible for sorting out[14] the topological dimension extending from the semantic map of ‘views’ - conversations, juxtaposed & networked together in an ‘argument[15] structure’.
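The ratio described above can be made concrete. What follows is a minimal sketch of my own, not part of any Xanadu specification: the field names and data structure are hypothetical, and the sort simply surfaces the least-contested conversations first.

```python
def contention_score(unique_negative: int, views: int) -> float:
    """Helm's ratio: unique negative feedback divided by overall views.

    A lower score means the content sits well with more of its audience.
    Unviewed content scores 0.0 by convention (an assumption on my part).
    """
    if views == 0:
        return 0.0
    return unique_negative / views


def rank_conversations(conversations: list[dict]) -> list[dict]:
    """Sort conversations so the most inclusive (least contested) come first.

    Each conversation is a dict with hypothetical 'negative' and 'views'
    counts; stable sort preserves the original order for ties.
    """
    return sorted(
        conversations,
        key=lambda c: contention_score(c["negative"], c["views"]),
    )
```

Under this rule, a conversation viewed widely but objected to rarely outranks one that provokes objections from half its audience, which is one plausible reading of “sorting out” the conversational topology.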

You create listening wherever you are when you practice it. What does this listening ‘sound’ like? Here’s one verbalization:

Nothing is wrong, no one is in trouble. If you’re angry with someone, or someone’s after you, join this conversation, and both of you will be heard fully.

Because oppositional communication becomes the least efficient thing across the conversational topology, split topologies[16] inherently incentivize listening for what’s beneath their gap of conflicting strategies. A sort of vertical integration or positive feedback loop comes into play between listening circuits and the dignity of data as labor. It’s as if the market automatically creates jobs for addressing ‘what doesn’t work about civilization’. Across all domains, people are interacting with an emergent complexity of self-governance through a hypertext platform.

The more of our shared humanity we bring present in communication, the deeper the structure of solidarity we’re listening into for inspiration[17]. Rather than anticipating others as a cause of scarcity, abundance and creativity are generated up front by structured inclusion.


  1. Since ‘they’re against our needs’. This is the distinction I see as the core of ‘Nonviolent Communication’, as formulated by Marshall Rosenberg.
  2. “Hypertext” being a word he coined.
  3. From Jaron’s book:
  4. Who Owns the Future? Which inquires, ‘How do we have a strong middle class, even while more and more of the economy is eaten by software, and increasingly machine learning?’
  5. Unfortunately, other than as an ongoing source of inspiration, Ted’s work has been on the sidelines.
  6. A generic term for the important, dimensional difference between Tim Berners-Lee’s World Wide Web and the Nelsonian insight: Xanalogical Structure is what I mean by “hypertext”.
  7. Xanalogical Structure, Needed Now More than Ever: Parallel Documents, Deep Links to Content, Deep Versioning, and Deep Re-Use by Theodor Holm Nelson
  8. Who Owns the Future? p. 225
  9. See the relation between the bold passages above in the second paragraph.
  10. This is a synthesis of Stewart Brand’s 1984 quip that ‘information wants to be free’ & the often omitted countervailing ‘information wants to be expensive’, a valid paradox that nonetheless had been resolved by Ted two decades earlier. & see Who Owns the Future? p. 226
  11. Communication as the ‘enemy ending’ best practice of Marshall Rosenberg, who was hitting his stride around the same time as Ted. But it’s as a Rosenberg process in Nelson structure where it could start to be a product. If I had a time machine I would introduce them to have Project Xanadu preempt the World Wide Web and be our Internet based publishing space.
  12. In Mirror Worlds. Or: The Day Software Puts the Universe in a Shoebox ... How It Will Happen and What It Will Mean, computer scientist David Gelernter speaks about this and provisionally dubs it “topsight” (p.52), though he comes across it from a different angle, it has a similar significance for him - see p.184 “repair the shattered whole.”
  13. Being conceptual and process oriented, ‘circuit’ because it is a mechanism like experiment or law. I’m making a point to highlight the mechanical like functioning of my augmenting proposal because you may have a reflex assumption that to accomplish what I’m pointing to requires an additional breakthrough in something.
  14. Which I’ll limit myself to describing here as the following.
  15. Inspired by the etymology - process of ‘reasoning made clear’.
  16. The appearance here of the living persistence of historical conflict.
  17. In this vision, technology is wiring us together in relationship, rather than serving as a tool for tracking and predicting behaviour and offloading this reductionism into Artificial Intelligence.

Jesse Grosjean

Future Of Text

What are the real potentials of text? Only together can we even begin to unleash its power.

I hope the Future Of Text includes tools that leave behind easy-to-read plain text files organized into well-named folders.

I know very little about the history of writing. I do know the printing press was a big deal with many long term ramifications. It accomplished this by making it faster and cheaper to edit and share text.

Computer text editors do the same thing with an even greater speedup. Each keystroke creates a new document. That document file can be distributed to the world, for free, in seconds. Text editors aren’t new, but I expect we are still in the early stages of understanding all the possibilities that follow this speedup.

Computers are tools for building tools. Text editors are just the simplest tool for working with text. The future is filled with many advanced tools and platforms that will publish, link, search, augment, and in general do great things with text.

This diversity of tools is wonderful for authors, but introduces much complexity. Switching between formats, platforms, and user interfaces often gets in the way of actual thinking and work. This complexity often obscures that fundamental edit and share speedup provided by text editors in the first place.

For this reason I hope future tools and platforms are built with a plain text escape hatch. Create powerful tools that do wonderful and complex things, but store that text in human readable plain text files. Files that are easy to access and edit using a standard text editor.

Imagine if Microsoft Word was built from day one around a Markdown-like format. Imagine that your Facebook posts were a directory of plain text files. Add a file to create a post. Your Twitter feed is just a text file: add a new line to create a new tweet. You might not use this escape hatch often, but it would add possibility. And I think that possibility could fundamentally change how authors today work and think in text.
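A plain-text escape hatch of this kind is simple enough to sketch. The layout below is purely illustrative, invented for this example, and bears no relation to any real Facebook or Twitter export format: posts live as one file each in a directory, and a feed is a single file with one entry per line, all editable in any standard text editor.

```python
from pathlib import Path


def add_post(posts_dir: Path, title: str, body: str) -> Path:
    """Store each post as one human-readable plain text file."""
    posts_dir.mkdir(parents=True, exist_ok=True)
    path = posts_dir / f"{title}.txt"
    path.write_text(body, encoding="utf-8")
    return path


def add_tweet(feed_file: Path, text: str) -> None:
    """Append one line to the feed file: one line per tweet."""
    with feed_file.open("a", encoding="utf-8") as f:
        f.write(text + "\n")


def read_feed(feed_file: Path) -> list[str]:
    """Read the feed back; any text editor could do the same."""
    return feed_file.read_text(encoding="utf-8").splitlines()
```

Nothing here requires a special application to read or repair, which is the point: the files outlive whatever tool produced them.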

I think only with plain text on the filesystem do authors have full computing ownership over their writing. The ability to move between applications and operating systems. The flexibility to incorporate new workflows. The hope that their writing will survive 100 years into the future. Anything else and the text is locked into and limited by some other technology outside the author’s control.

Jessica Rubart

The Future Of Text

Text is an important means for interdisciplinary and intercultural communication. I am very much concerned with research and development on Hypertext and Hypermedia systems. When talking about such systems, most people think about the Web as the most famous and widespread Hypermedia technology. From a historical view of the Web, work of Hypertext pioneers, such as Vannevar Bush, Douglas C. Engelbart and Ted Nelson, can be considered “Hypermedia Until the Web” (cf. [1]). However, to me Hypertext and Hypermedia is in the first place a means to support interdisciplinary and intercultural communication and collaboration. Links can be explicit or implicit. They can connect or compute, for example, text elements, different kinds of media, people, or places, and in this way support communication and collaboration in manifold ways.

Interaction technologies, collaborative systems, as well as artificial intelligence will prominently enhance thinking and communication through text. Multimodal interfaces, for example, provide users with a number of different ways of interacting with a system [4]. This relates to the usage of diverse devices as well as to natural modes of communication, such as speech, body gestures, or handwriting. In the context of production work processes, for example, projection-based Augmented Reality (AR) can assist workers in the different steps of the production process [2]. For example, such assistance systems can project instructional text on physical objects. AR glasses can also be useful to collaborate with remote peers, share the visual field of the glasses, and discuss the current work situation. In addition, mechanisms of artificial intelligence can be useful in many situations to support interdisciplinary and intercultural communication. For example, analyzing online social networks, such as profiles and tweets in Twitter, can help to infer nationalities and get insights into user behaviours and linking preferences to other nationalities [3].

Beyond all those developments, sometimes it is most constructive just to read a linear (electronic) book and to share your thoughts with others later.


  1. Aiello, Marco: “The Web Was Done by Amateurs: A Reflection on One of the Largest Collective Systems Ever Engineered”, Springer, 2018.
  2. Büttner, Sebastian; Besginow, Andreas; Prilla, Michael; Röcker, Carsten: “Mobile Projection-based Augmented Reality in Work Environments – an Exploratory Approach“. In: Workshop on Virtual and Augmented Reality in Everyday Context (VARECo), Mensch und Computer 2018, German Informatics Society, 2018.
  3. Huang, Wenyi; Weber, Ingmar; Vieweg, Sarah: “Inferring nationalities of Twitter users and studying inter-national linking”. In: Proceedings of the 25th ACM Conference on Hypertext and Social Media (HT’14), ACM Press, 2014.
  4. Rubart, Jessica: “Multimodal Interaction with Hypermedia Structures”. In: Proceedings of the 1st Workshop on Human Factors in Hypertext (HUMAN’18), ACM Press, 2018.

Joseph Corneli

You’re Making Me Tense. Notes On Text And Futurity

“And finally, whether it has essential limits or not, the entire field covered by the cybernetic program would be the field of writing.”

Jacques Derrida, On Grammatology

Before the invention of cinema, the ‘moving image’ was a shadow, a flag, a pageant. I first invoke a phenomenological perspective on the Future Of Text inspired by this history. I then turn to narrative accounts of two long-running projects I have been involved with. The essay as a whole is intended to be an exercise in the so-called Kafka effect1. My hope is to spark new thinking about text.

A shadow. The ‘Future Of Text’ is a blank page which is filled in. More sinister, the difficulty of writing with the non-dominant hand, a deficiency that can be corrected with discipline. Text has a paleo-future, traced in phonograph records, written to be read with a diamond; also a deep history, brought to you by the letters A, C, G, and T.

A flag. The phénakistiscope cheats the eye and goes straight to the brain. Plateau, who harnessed animation to create the art of cinema, also studied the animation of matter itself. In 1832, the real and the virtual collide. The phénakistiscope, armed uprisings, a cholera epidemic. Paris will be rebuilt from the ground up: “The underground galleries... functioning like an organ of the human body, without seeing the light of day.”

A pageant. Man is not a rope but a quipu, with a few or a thousand cords, each with a series of offerings, including mysterious fibre balls of different sizes wrapped in ‘nets’ and pristine reed baskets. He sits in silence and the Earth speaks.

Arxana was based on the idea of making everything annotatable, on the view that texts grow through the addition of ‘scholia’ (Corneli and Krowne, 2005). These cluster, subdivide, and evanesce, the full formal rules of their composition as yet unknown. Inside Emacs, regions were marked up with text properties. Links were stored as triples and manipulated programmatically (Corneli and Puzio, 2005-2017). We would eventually demonstrate some inklings of Arxana’s mathematical relevance (Corneli et al., 2017). Meanwhile, to boost my flagging motivation in the face of mounting complexity, I decided to try creative writing. The medium, a graphic novel without pictures. The method, at first, typing onto 3x5 index cards with a mechanical typewriter. Later, I transcribed what I imagined I heard in randomized and layered spoken word and audio journals, and presented the results at a Writers Workshop. I had intended to use Arxana to manage the resulting corpus and to assemble a text for publication, but that hasn’t happened so far. Arxana was set aside throughout most of the 2010’s. I completed a PhD in computing and two postdocs focused on topics in AI. My creative writing experiments were superseded by Jungian therapy and a dream journal.

The Peeragogy Handbook is a how-to guide to peer learning and peer production. It currently exists in a third edition (Corneli et al., 2016), with a fourth on the way. The title derives at first from a cross-language pun: paragogy, viz., generation, production (Corneli and Danoff, 2011). Howard Rheingold invented a neologism that made the topic more practical and appealing, and used the occasion of his 2012 University of California Regents Lecture to invite widespread participation. Building blocks for a distributed poly-centred University already exist:

To get from here to there, we will need more effective learning pathways. In the fourth edition of the Peeragogy Handbook, we are tackling this by improving the way we use design patterns. There are relevant paper prototypes2. Deployed widely, peeragogy would be a new powerhouse for knowledge construction. By contrast, a new ivory tower would only go up in smoke as the world burns.

Reading and writing are intimately related. If you don’t believe me—or your own eyes—consider that machines are still pretty bad at both. Humans also struggle. We use text, in combination with other machinery, to transcend ourselves—across time, space, and identity. Turing predicted machines “able to converse with each other to sharpen their wits.” I think that’s where we’re headed, but I don’t subscribe to his prediction that machines will therefore take control. It’s more complicated. Pay attention to the gaps between intention and action, issues and their resolution, questions and answers, problems and solutions. This is where we weave.


  1. Réda Bensmaïa in the foreword to Deleuze and Guattari’s Kafka: Toward a Minor Literature (1986) refers severally and jointly to Kafka ‘effect(s)’, emphasising ‘a reading of Kafka’s work that is practical’. Proulx and Heine’s article “Connections From Kafka” (2009) focuses on one concrete effect: priming for learning. More broadly, this line of work has to do with understanding the conditions under which violated expectations lead to new ways of thinking, rather than a retreat.
  2. ‘When enough slips merged about a single topic so that he got a feeling it would be permanent he took an index card of the same size as the slips, attached a transparent plastic index tab to it, wrote the name of the topic on a little cardboard insert that came with the tab, put it in the tab, and put the index card together with its related topic slips.’ Robert Pirsig, Lila (1991).


Joel Swanson

Play With Your Food

I have an unhealthy obsession with “sugar cereal.” As a child, I was rarely allowed to indulge this infatuation, but once a year, for my birthday, my parents would take me to the grocery store and let me choose a box of whatever high-fructose-laden kibble I desired. Every December 12th they would wrap my cereal as one of my birthday gifts.

It wasn’t just the contents of “sugar cereals”; the boxes themselves were visual delights. Fauvist constellations of dimensional typefaces complete with menageries of unicorns, cracked-out leprechauns, technicolor Dodo birds, and chocolate vampires. I remember studying these boxes, reading and re-reading every word, jot and tittle—even the nutritional information. These textual artifacts played a significant role in my induction into language.

While it was not my favorite-tasting cereal, I was always drawn to Alphabits. The alphabetic shapes made me choose this cereal more than once for my birthday. Nearing the end of every bowl, when only a handful of letters remained, the letterforms would float and swirl around each other. As I prodded them into motion with my spoon, I felt as if these sugar-coated runes held some secret message meant only for me. I was confident that my cereal-based Ouija board held the secrets of the universe.

And yet there was always a sense of attendant disappointment; the actual shape of the cereal never matched the perfection on the front of the box. Lucky Charms’ packaging promised perfect purple horseshoes and green clovers, but in reality, the cereal shapes and marshmallows were often blobs of unrecognizable mush. Same with Alphabits. Letters were often mutated, deformed, and conjoined into unrecognizable glyphs.

This practice of literally eating my words was one of my earliest and most significant memories of falling in love with language. Cereal was my interface. The branding, packaging, and mass production of inexpensive breakfast food all contributed to this curious technology of inscription. The relationship between reading a cereal box and eating its contents created a strange Pavlovian connection for me between language and consumption. 

Words always have a body; my childhood memories of breakfast cereal taught me the significance of the materiality of language. But with the advent of the digital, the virtual, and the “intuitive” interface, the body of language is buried, or subsumed by the “cloud” of digitality. As our interfaces become more invisible, more integrated, less tangible, our readerly and writerly interactions with the textual become more autonomic and less visceral.

Tech companies seem intent on making these liminal spaces of textual interface as transparent and “intuitive” as possible. But this space is a crucial topography. It is the terrain of protocol. We need artists and poets and hackers to muddle this space, to glitch the interface, to play with language in whatever media, technology, or materiality text is inscribed. We need people to keep playing with language, to remind us that language is, and always was—a thing—inscribed in a materiality that significantly shapes its potential.

Language is beautiful, reductive, powerful, and messy. The danger of text, and of language in general, is that it stands still and becomes stagnant. When we cede our ability and responsibility to shape discourse, when we let the dominant paradigms (whether they be political, corporate, religious, or technological) control and dictate not just the content but, more significantly, the platforms and structures of text: this is when we truly lose.

Johanna Drucker


Language became a controlled substance in 2037, subject to all of the terms of the Corporate Speech Act. Word licenses had been privatized for nearly two decades by the time the sweeping reforms took place. The need for restricted language use had been demonstrated repeatedly by abuses of free expression and wanton imaginative speculation, and the conspicuous role of poetics was noted in the ruling passed down by the Supreme Court in 2031. The case had made its way slowly through the district and circuit courts, but the brief brought to SCOTUS was succinct in its argument that removing language from free circulation would have beneficial effects on the populace.

Anti-viral vaccines against syntactic perversion had been used in mass inoculations, particularly at the elementary school level, and a series of treatments for reforming textual practice had been shown to be successful even in fairly persistent cases of compositional activity. But the deeper issue remained: language was free-ranging and could not be successfully controlled. Monitor tags had been issued, and once attached to morphemes, suffixes, inflections, and prefixes, they were supposed to provide a full data image of language in use in real time. But the capacity for innovative transformation meant that neologisms and subculture dialects evaded detection by law enforcement. While this was clearly a public health issue, corporate intervention was necessary to stem the flow of controversial and subversive language into the meme stream.

Meanwhile, research on quantum linguistics continued in research laboratories and in particular in the universities. Though cited repeatedly for violations of the Speech Act, the scientists expanded their examination of syntactic variables at the nano-text level. Using a high-speed lexical collider, they split the language particles and released verbal energy in packets that were immediately redistributed across the semantic field. At that point, it became evident that the mirror function of language appeared to be breaking down. Pathological speech forms aligned with new narcissistic disorders made their way through the body politic. The inability to link the linguistic acts to a stable entity recognition became further evidence of the rapid decay of social language practice.

The semantic futures markets went into freefall. In a desperate effort to keep linguistic dark matter from consuming the entire communication network, almost all vocabulary was removed from circulation and grammatical limits were placed on users. Word rationing was put into place. Individual allotments combined with discourse surveillance had an immediate palliative effect. With a reductive syntax and limited word choice in place, and the cost of individual discourse sky high, the populace was able to join in a joyful chorus. “No worries” rang out like an anthem from all points in the social realm. Affirmation gained back its market share, and dissent was without expression or representation. The body, no longer politic, dispersed in a wave function of linguistic disequilibrium. Textual practice has become highly circumscribed as a result, with optimal investment return predicted in the exclusive sectors of the market.

Johannah Rodgers

The Future Of Text: More Questions Than Answers

The text processors we use today are, in my opinion, shockingly similar to those actually developed in the late 1960s and 1970s. At that time, there were serious hardware constraints that had to be addressed in order to develop even the most basic text processing applications. Today, we have so much storage memory and processing power that we hardly know what to do with them. Yet, many of the same constraints are still evident: cut or deleted passages and words disappear rather than being automatically stored and presented AS deletions; projects are separated into files that do not communicate with one another; the desktop space is, for most users, so limited that it is difficult to open and view several files at one time. What is still missing from even the most sophisticated text processors is the ability to synthesize multiple sources and new textual combinations in ways that actually facilitate and assist a human writing process. The making of connections is ultimately what propels a writing process forward.*

There were and still are compelling visions of how text processors could function. One of the most intriguing can be found in Ted Nelson’s 1965 article “A File Structure for the Complex, the Changing, and the Indeterminate,” in which he envisions how a text processor could be used to enhance a human’s alphabetic writing process by facilitating multimodal communication across multiple document types. In this vision, machine languages (coding) function in the service of human alphabetic language processing (writing). However, in the text processor that he and others actually developed in the 1970s, alphabetic language processing is not only separated from machine language functions but must conform to their structures and limitations. Although named “Juggler of Text” (JOT) in reference to his original vision, JOT functioned much more like other text processors, which, if they are text jugglers, actually juggle only one type of ball, not several, and even then in ways that often distract from, rather than enhance, the writing process.

In a human writing process, text manipulation is functional, not functionless, as it has been traditionally modeled in text processors. In other words, the moving of textual components around in space and time and their combination and association with other texts create meaning. Humans process alphabetic language associatively and in a productively ambiguous manner that generates new insights, ideas, and connections. It is sometimes said that it is in the “wording process” that one is thinking; but the “wording process” has to do with more than words. Representing alphabetic verbal language for humans is a multi-sensorial process that involves seeing, hearing, touching, and smelling. It is also a recursive process guided by the search for a representation that addresses, at the very least, several thousand criteria.

“Writing,” defined as a human cognitive activity involving multiple drafts and a revised finished product that bears only the barest resemblance to the notes and drafts from which it is derived, is a searching for a very particular representation that is realized only in the process of its materialization and articulation. As such, it is a distinctly human act. My hope is that the next generation of text processors has more to do with facilitating the human process of writing and less to do with the production, management, and circulation of text as a material object.

*As one example, compare Illustration 1, which contains screen shots of notes used in the preparation of this text, with the printed text you are reading, itself only one version of the possible texts contained in these notes. These notes were created using a 2014 web-based emulation of a 1986 version of Ted Nelson’s “Juggler of Text” (JOT) application (

Illustration 1: Author’s Notes, Created With a 2014 Web-Based Emulation of a 1986 Rebuild of Ted Nelson’s “Juggler of Text” (JOT) Text Processing Application

John Armstrong

Forward To The Past With The Emblem Book

I’d like to start by making a few things clear:

I think futuring is a mistake, especially in the current world of new media with its myriad possibilities. What may not be a mistake is identifying the latent potential in things and exploiting those per se without an eye on something we think of as the future. Some of those potentialities are from the past and sometimes from the quite distant past.

One of these is the emblem book, which was universally popular in Renaissance Europe in the 16th century. This consisted of an emblem on each page, and each of these emblems expressed a moral or religious truth in three complementary ways: title, image and verse. The purpose of the image was to express in a more straightforward form the meaning of both the title and the poem. The verse did the same for the image and the title.

Since the sixteenth century scholars have analysed this mutuality and tried to identify, without success, why it was so popular across different cultures and beliefs for so long.

One of the most popular objects of the 21st century is the meme which can be viewed, by me at least, as a simplified type of emblem. This particular object carries the potential for further development without losing any of its current power and popularity.

The development of the meme towards the emblem would produce material of both cultural and educational value. One of the emblem’s many uses was to make complicated ideas and beliefs more accessible to a relatively untutored readership. Those who couldn’t ‘grasp’ the intent of the poem could find some assistance in the accompanying image and vice versa. This kind of use of images is recognised as more than that of ‘simple’ illustration whereby the story or narrative is depicted as a more or less precise depiction of what is contained in the text. Nor is it, as with some verse, an artist’s individual and subjective response to the poem. The image here is designed and produced to perform a specific function: the explication of the text by means of additional context.

In this sense it becomes more of a diagram and less of a picture which is why it needs to be made with great care and sensitivity to what’s been said.

It may come as no surprise to learn that one of the frustrations of early emblem makers was the tendency for printers to create pirate editions that strayed far and wide beyond the original image.

One of The Poem’s many gifts is that it is inherently good at providing a truncated expression of complex and abstract ideas. This is achieved by means of verbal compression and precision, techniques that poets have utilised since Homer and Hesiod. In this context my suggestion would be that emblems may in future be deployed to propound and democratise some of our more esoteric and abstract ideas.

Warming to my theme, Charles Olson’s magnificent ‘Maximus’ epic contains a longish free verse illustration of A. N. Whitehead’s view “That the actual world is a process, and that the process is the becoming of actual entities.” Olson’s illustration, about a man at night in a fishing town, can be further compressed, and the various elements can also be combined in a way that enables everything to be seen as a series of processes rather than objects. This would also be provided with images of the main processes that Olson describes.

One of the things that the internet has shown us is that notions of creative and writerly originality are dead and that any attempt to preserve these is doomed to failure. We must therefore welcome the pirate and the bowdlerizer as our peers in developing and disseminating this new format.

There are of course other complexities that would benefit from emblemification. Currently the UK has allowed itself to be immobilised by something that’s referred to by those who should know better as ‘Brexit’. This has been the subject of two Supreme Court judgements which clearly point out just how complicated the leaving process will be – a fact overlooked by the British media and meme creators, who seem intent on polemic and corrosive polarity. This new version of the emblem would be ideal for the clarification of judgements such as these.

Moving images were not available to Renaissance emblem makers and these present a huge hoard of additional possibilities for the future. Using the Whitehead example, things changing whilst in motion would be perhaps better than a still image of, for example, a series of apples in different stages of growth and decay.

In addition to precise use of language, poets make use of metre and rhyme in order to make verse easier to remember. Reading rhyming verse out loud is a further aid to recall and tends, research shows, to provide a closer understanding of the meaning(s). Shakespeare’s Sonnet 116 has this to say about love:

Let me not to the marriage of true minds

Admit impediments. Love is not love

Which alters when it alteration finds,

Or bends with the remover to remove.

The unfamiliar syntax is clarified and the ‘sense’ of the content is much more accessible and memorable when listened to than it is when read solely with the eyes. The addition of audio to the emblemified meme would therefore further clarify what is being said.

In conclusion, I would suggest that the meme is ripe for further development and one of the ways that this could be enhanced is by the appropriation of the techniques deployed in emblem books. I believe that current and future forms of media can be deployed to expand the potential of all three of the traditional elements provided that the principles of clarity and aesthetic value are kept at the forefront of the enterprise.

John Cayley

No Future, No Future, No Future For ...

Consider the relationship — a relationship that we may preserve — with the text that is found in books. This text is a record of language as writing, specifically and closely integrated with one or other of our located practices of language as a constitutive aspect of everyday life. The words or text, we say, are “in” the book but we do not imagine that their existence as grammatological entities has any living, actual reality — in time along with space — unless and until they are read by someone who can open the book and has the faculties to read them. The text in books only ever becomes the co-creator of living language when it is read. And until very recently it was inconceivable that it should enter into living language by any other means. As long as this was true, it meant that the text was only ever some part of the living languages practiced by human readers, immediately empowered, as they read, with their ability to relate these actualized traces of language with the totality of their lived experience, including their full experience of language itself, crucially including phenomena such as puns, ellipsis, polysemy, metaphor, and ambiguity – ambiguities at all levels of putative structure: from Empson’s famous seven to ambiguities of evocalization and even segmentation. This was and is the text of literature, text as co-constitutive of living language.

Compare the relationship that we have with digitalized text, text which is, ostensibly, the same. Yes, we can and do read this text in exactly the same manner. But it has also taken on a half-life of its own. Our imaginary – now a computationally inflected imaginary – easily conceives of words and other segments of language as instantly and immediately subject to the operations and analyses of algorithms as soon as these words have been “processed” and regardless of any actual situation they may occupy in a particular located event of language or literature. I mean, for example, that any “word” I write here becomes, in principle, just another instance of any other instantiation of “word” in the system, corpus, network, or internet of which they are a part. And so all words’ existences and uses are defined, since digitalization, as much by their place in networks of algorithmic processing, operation, transaction, and analysis as by any eventual reading that we may have for them.

Digitalized, processed words are an abstracted form of text in this context. They are no longer, necessarily, the complex, fractal grammè of a language. They are no longer written or spoken. They are no longer, necessarily, what emerged from the book as and when we were used to read; no longer, necessarily, the grammè that we may still read from the words’ symbolically abstracted forms. In a Future Of Text that is generated by so-called machine “learning,” we can have no meaningful sense of what our computational networks are making of these words-as-text. We simply feed them in and read them out.

Digitalized text has no future as language unless it listens to and talks with us.

The text of literature will be overwhelmed and subsumed by the advent of aurature.

Language as such will live on in the practices of our evolved faculties.

John-Paul Davidson

The Future Of Text

“Medium,” 14-carat nib,

Three gold bands in the clip-on screw-top,

In the mottled barrel a spatulate, thin

Pump-action lever

The shopkeeper

Demonstrated,


The nib uncapped,

Treating it to its first deep snorkel

In a newly opened ink-bottle,

Guttery, snottery,

Letting it rest then at an angle

To ingest,

Giving us time

To look together and away

From our parting, due that evening,

To my longhand

“Dear”


To them, next day.
— Seamus Heaney, “The Conway Stewart”

My children never receive letters. Not even postcards. I stopped sending them over a decade ago from my travels as they never read them. They lay by the front door when I returned home, ignored. None of their friends use them. They have forgotten what a stamp is and how to address an envelope. They do get reams [a more evocative word than megabits] of texts, on various platforms, short blasts of daily ephemera that are usually forgotten as soon as their contents are quickly digested. When a new operating system arrives, or their phones are lost, which happens all too frequently, that data will probably disappear too or just be overlaid with more chatter. The cloud can only hold so much before it starts to rain. And who knows what OS will be there in a hundred years and whether it will be capable of reading those words written a century before. Passwords forgotten, history forgotten. The details of emotional lives, and only dimly remembered events, forever erased.

So there’s a luddite in me that wants to celebrate text as Handwritten, not only for its beauty and tangibility, but also because ink on paper has proven to last, just as celluloid may end up, ironically, being the best archiving solution for films as digital storage requires continual updating. Films I made forty years ago and archived on 2” tapes are unplayable but the original films stored in rusty metal cans are as good as the day they came out of the lab’s chemical bath.

I’ve been researching a book on my grandfather, one of the founders of X-ray crystallography. In an old shoe box in Ithaca I found neatly stacked letters to his mother from the Eastern front. He’d been stationed in what is now Lithuania by the German Army [he was German]. His task was to operate the first mobile X-ray machine – high tech in 1916, though the bulky apparatus was pulled by horses. Every day he wrote to his mother, who lived near Munich – the letters a testament to his love for her and a chronicle of his life at the front. Opening the letters, clearly stamped by the Feldpost [the German army’s mail service, exemplarily efficient] with the date and place, a whiff of that period emanates from the envelopes, ghosts from a life long gone.

In my grandfather’s tiny handwriting, for he had to fit it all in a page, his daily activities and thoughts pour out: a visit to a local Yiddish theatre group, powering up the generator to take an X-ray of a soldier’s broken arm, and his struggles to work out the equation for diffraction patterns of crystals using just paper and pencil. It’s all in the detail. And beyond the content there is the delight in seeing someone’s personality emerge in the physical handwriting, not in a graphologist’s pseudo-scientific analysis, but in imagining the hand that dipped the pen in the ink, paused, and started to put pen to paper. Seamus Heaney beautifully describes this physical sensation in his poem dedicated to the Conway Stewart. The future holds little scope for the handwritten letter, perhaps only used now for condolences and for messages passed from cell to cell to avoid the all-seeing eyes and ears of security services. But as the tsunami of information overwhelms us, the personal Handwritten letter still has a force that eludes any keyboard.

Joris J. van Zundert

Code As Text

Text is not the passive registration of meaning it is often taken for. Make no mistake. Writing was invented as a technology to wield control. If you want to rule a city state, levy taxes, enforce law, and reaffirm your authority by calling upon deities, you are going to need testimonies and administration that go well beyond notches on a stick. Cuneiform was text invented as a tool of power first, and of knowledge and art only second (Scott 2017:41). Through millennia text has exercised power in very Latourian ways (Latour 1992): it is objects and symbols that shape our behaviour. Think only of the millions of signs “no drinking water”, “no trespassing”, “please queue orderly”. We follow written instructions and laws desultorily and routinely, without questioning their aptitude. We go about text like well-trained machines. Umberto Eco tried to describe how such human machines work, picturing text as a series of signs comprising instructions for a process of highly interactive contextualized interpretation (Eco 1981:43). And before that J.L. Austin (1962) had pointed to the performative nature of the speech act: how we accomplish things with language because it affects people and makes them do stuff. Written text, like speech, is interactive to its bone.

The interactive nature of text has only been amplified with the advent of a new kind of text that programmers call “code” for short, or “software”, “source code”, or “computer language” when they talk to the less initiated. Like text, code is a series of instructions in a symbolic language. But where text is most often a set of instructions to infer meaning, code most of the time is used to effect action. Obviously, as a philosophical aside, it should be noted that action can be endowed with its own meaning. It should also be noted that “new” is a rather relative notion in this context. Separating a machine from its instructions to make it more versatile by feeding it some form of operating code goes a considerable way back – at least as far as Ada Lovelace’s 1843 algorithm for Charles Babbage’s analytical engine (Petzold 2000:250–251) and the invention of the punched-card-operated Jacquard Loom in 1804 (Ceruzzi 2012:7–9), probably even beyond that. However, it was Alan Turing’s work, the Von Neumann architecture, and a score of related developments that eventually led to the introduction of the personal computer, which made code as a form of text suddenly far more pertinent to the lives of billions.

The kind of text that drives algorithms, software, and computers comes in many guises. Some are pretty readable, such as Ruby or JavaScript (figure 1). Others are terse, like Regular Expressions (figure 2). And some are downright hermetic like the gimmicky [] (pronounced “brackets”, figure 3), or the indeed very serious Assembler Language (figure 4).

[1, 2, 3, 4, 5].select(&:even?)

Fig. 1: A line of text in the Ruby programming language (“About Ruby” 2020). When executed, the result is displayed as “[2, 4]”.

(?![^ ]*:) \p{Lu}{1}[^ ]+

Fig. 2: Example of Regular Expression (RegEx for short), a language to describe search patterns in text (Goyvaerts 2002). This one looks specifically for anything that is not whitespace followed by a colon, space, and a capitalized word.
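As a hedged aside (not part of the original essay), the pattern of Fig. 2 can be exercised directly in Ruby, the language of Fig. 1, since Ruby’s regular expressions support Unicode property classes such as \p{Lu}; the sample sentence below is invented purely for illustration:

```ruby
# A minimal sketch applying the Fig. 2 pattern in Ruby.
# The sample text is a hypothetical illustration, not taken from the essay.
pattern = /(?![^ ]*:) \p{Lu}{1}[^ ]+/
text = "note: Chapter One begins"

# String#scan collects every non-overlapping match: here, each match is
# a space followed by a capitalized run of non-space characters.
matches = text.scan(pattern)
puts matches.inspect  # prints [" Chapter", " One"]
```

Any regex engine with Unicode property support would behave similarly; only the delimiter syntax around the pattern is Ruby-specific.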


Fig. 3: A text line in [] (Kurz 2016).

rep movsb

pop esi

mov edx, sec-voffset

mov ecx, pe_header

mov ecx, dword ptr [ecx+34h]

add edx, ecx

Fig. 4: Assembler Language code listing “low level” instructions to a computer; a tiny part of a computer virus (“Computer Virus” 2020).

But essentially any programming language, considered as a machine of text, applies the same symbolic technology as all written text. The harder one studies the differences between code and human writing, the less one becomes convinced that there are fundamental differences, apart from one crucial aspect: computer languages have a formal syntax, whereas human language is somewhat lenient and flexible. But if anything that should make computer language and code-as-text easier to read, for most computer languages have a very concise set of syntactic rules. Code and text literacies are not opposites or mutually exclusive; they are part of the same continuum of symbolic literacy. I remain utterly surprised that very smart people who master Latin and Greek to perfection can also be adamant that they are fundamentally unable to learn to read computer code. Yet, whether as a consequence or as a presupposition, code is mostly not regarded as a relevant form of text within scholarship.

Yet, make no mistake. Code rules communities. More so, probably, than written text did and does. First of all because there is vastly more of it: most everyone uses information and objects infused with code on a daily basis. But more importantly, code adds agency to performativity. A book does not have the ability to react to its environment. But all things governed by code, from ebook readers to watches, to airplanes, cars and streaming services, may be enabled to react to circumstances, to ever greater levels of sophistication. Code is text that executes and acts, for better or worse. If the text is poorly written, cars, planes and ebooks crash. If the text is written with malicious intent, your watch might snitch on you. And if the text is benign and lovingly written, it might result in great art (e.g. Brown 1997, figure 5; Carpenter 2010, figure 6) or a wonderfully entertaining experience (e.g. Pearce 2008:157, figure 7).

Fig. 5: Detail of Paul Brown, “Swimmingpool”.

Fig. 6: J.R. Carpenter, “CityFish” (fragment).

Fig. 7: Box front cover (detail) of the computer game Myst (1995) that gathered legendary status in gaming history.

Text that is computer code with “delegated agency” is an extremely versatile and powerful technology of expression and influence. Increasingly our culture, society, politics, and economies are heavily shaped through this type of text – with the Cambridge Analytica scandal as a current absolute low (Berghel 2018). The skills of our critical scholars, historians, philosophers and citizens lag far behind. In 1968 the eminent historian Emmanuel Le Roy Ladurie wrote “l’historien de demain sera programmeur ou il ne sera plus” (the historian of tomorrow will be a programmer, or he will not be at all; Le Roy Ladurie 2014:14). We are 50 years on, and the programming historian remains mostly an endangered species.

We used to manipulate symbols to create meaning on paper; we now also manipulate symbols to create action through machines both digital and mechanical. It is mesmerizing to see what we do and can do with that technology. That dream has already materialized. My hope would be that it does not turn into yet another nightmare of disciplining (Foucault 1975) through technology abused. For solely in the hands of a few, a technology often becomes a malicious force of control. Expanding the literacy of citizens into the realm of code is not a luxury or a neoliberal fad. Nor a dream. It is a sheer humanistic necessity.


Judy Malloy

The Words Of The Creators

“I don’t hear the music I write.

I write in order to hear

The music I haven’t yet heard.”

— John Cage, “Autobiographical Statement,” 1990

In “CUT-UPS”, an undated film by Matti Niinimaki, William Burroughs remarks:

“When you cut into the present, the future leaks out.”

As experimental writers straddle the two worlds of computer science and creative writing – if we follow the elusive quest in Kafka’s The Castle into stochastic texts and computer-mediated Fluxus poetry; and into early interactive fiction where the Wumpus and the Grue shadow the reader in perilous journeys of language and allusion; and into hypertext where, in Robert Coover’s New York Times-published words, “the possibilities are no doubt as rich and varied as in any other art form.” The same might be said of exploring the future by distilling the past.

Through the lens of distilled words of writers and critics, this essay looks to the future of computer-mediated literature.


“I should like to write four lines at a time, describing the same feeling, as a musician does; because it always seems to me that things are going on at so many different levels simultaneously.”


“It is impossible to over emphasise the importance of the program; without it a computer is like a typewriter without a typist or a piano without a pianist. With this in mind it becomes clear that all our questions about what a computer can do need to be rephrased. The proper question to ask is not ‘can a computer do this?’ but ‘can we write a program to make a computer do this?’ …


“It seems to be very significant that it is possible to change the underlying word quantity into a ‘word field’ using an assigned probability matrix, and to require the machine to print only those sentences where a probability exists between the subject and the predicate which exceeds a certain value. In this way it is possible to produce a text which is ‘meaningful’ in relation to the underlying matrix.”


“ELIZA is a program operating within the MAC time-sharing system at MIT which makes certain kinds of natural language conversation between man and computer possible. Input sentences are analyzed on the basis of decomposition rules which are triggered by key words appearing in the input text. Responses are generated by reassembly rules associated with selected decomposition rules.”


“The [windup] canary chirps, slightly off-key, an aria from a forgotten opera. From out of the greenery flies a lovely song bird. It perches on a limb just over your head and opens its beak to sing. As it does so a beautiful brass bauble drops from its mouth, bounces off the top of your head, and lands glimmering in the grass. As the canary winds down, the song bird flies away.”


“If we consider these two extremes, writers going towards the world of visual arts developing what is known as visual poetry, and visual artists going towards the world of writers developing what is known as language art, I would like to oscillate between these two poles. I hope that my works would engage the viewer or the participant, both at a literary level and a visual level.”


“The new literature will be more than word-processed-pages turned by pressing buttons. The viewer/reader will be connected to computer books in ways not possible with paper books -- through varying degrees of interaction, through the computer’s musical instrument-like responsiveness, through text-access methods that simulate the reader’s own thought processes, and through the visual qualities of the framed and glowing monitor.”


“I wanted, quite simply, to write a novel that would change in successive readings and to make those changing versions according to the connections that I had for some time in the process of writing and that I wanted my readers to share.”


“As my great-grandmother told stories, she wove in and out of the past and present, the old country and America, English and Yiddish, business and family, changing voice from first person to direct address to third person. Each detail was part of the narrative continuum and was potentially linked to several other stories or digressions. It was as if she had a chronological, topical and associative matrix that enabled her to generate stories in which content, structure and context were interdependent.”


“The narrative needs neither beginning nor end because the narration is entirely built by each reading of each text: one novel can thus be constituted by one or an infinite number of texts and no reader reads the same number of texts. There is no structure of the narrative, only an idea of a virtual one built by the reading itself.”


“...The multiple readings of the text finally exist not so much in what the lexias say but rather in the relations they forge with one another. These relations come into existence and dissolve with each reading and unfold into different versions of the text...the female text exfoliates outward, spilling over the boundaries in multiple directions that reveal to the reader the significance of the social, the political, and the historical in any artistic endeavor.”


The quotes in this essay were taken from my notes. Although I have verified these quotes as carefully as possible, the sources of many quotes from the early days of electronic literature are published and republished across platforms, and in the process of moving from platform to platform, variations occur. For instance, the quote from Zork came from my own playthrough of the Infocom Zork I, but I cannot be sure if it existed in the original or not. Even Virginia Woolf’s letters have been variously published. And, as Michael Joyce notes, writing about electronic literature in Of Two Minds, “The pages do not turn alike for any reader”.

Kari Kraus & Matthew Kirschenbaum

Sensorius Ink

Ink runs and soaks, pools and flows, is wet and dry, liquid and paste. It coats and conceals, but also reveals—ink is an agent of change, as Elizabeth Eisenstein once said of one of its most visible metonyms, the printing press. Ink may blemish, redact, or censor, but it also illuminates—recall the hollow reed of William Blake becoming a rural pen that stains the waters clear. It may be delivered by the delicate strokes of a calligrapher or the brute pounds-per-inch of a press. Ink is thus tightly coupled to inscription, to incision and indentation, impression and imprinting. Ink also obeys the laws of gravity, sinking and seeping into whatever lies beneath. Indeed, it encroaches upon and invades its substrates, the catalyst for capillary actions which, like the alveoli or air sacs in our lungs, draw ink’s suspended pigments and particulates right up against the membranes it then penetrates to bind with its host. Look closely at some ink impressed into paper (most any paper and ink will do): You won’t see separate surfaces, you’ll see coagulation and contamination, unclear boundaries and indeterminate edges.

It may seem strange to begin a commission on the Future Of Text with a meditation on ink. Shouldn’t this be about pixels and plasma, retinal displays and Google glasses? At the very least, shouldn’t it be about the casual electrostatic miracle of the laser printer? But like printers themselves (yet another word that has made the migration from vocation to appliance), ink is now high-tech.

Our focus here is on a class of experimental industrial inks that function as non-digital sensors, or what we have chosen to call sensorius ink. Broadly, a sensorius ink is one that detects and responds to specific stimuli in its environment. Like all sensors, these inks produce a legible output in response to an environmental input. Thermochromic inks, for example, are widely used in battery testers, beverage containers, and thermometers. They detect heat (the input) and respond with a change of color (output); a photochromic ink, meanwhile, detects light (input) and responds with a change in color (output). Although each is sensitive to a different stimulus, they both react with dramatic chromogenic displays.

Our interest in this class of inks is twofold: first, despite their apparent newfangledness, they revive a long and fascinating tradition of experimentation with similar novelty inks that date from antiquity. These “sympathetic inks” or “inks called Sympathetical,” as the French chemist Nicolas Lemery (1645-1715) referred to them, were understood as functioning in ways analogous to the medical sympathies of the human body, in which pain, disease, or injury afflicting any one organ or system can physiologically induce similar changes or effects in another (Macrakis 68). Likewise, the thinking went, a sympathetic ink was not insulated from its environment, but rather responsive to it. Such sensitivity enabled the ink to echo and reflect--to sense and sympathize with--its surroundings.

Second, they are a paradigmatic instance of what Stacy Kuznetsov and collaborators call low-fidelity sensors, by which they mean sensors whose outputs are imprecise rather than precise, or qualitatively rather than quantitatively expressed and rendered. Kuznetsov et al. recommend looking to nature for models of low-fi sensors, citing examples such as the behaviour of bees, which can signal drought conditions, or the color of a hydrangea, which can roughly indicate the pH range of the soil (231-232). These examples illustrate that lo-fi sensors require expertise, sensitivity, and awareness--tacit knowledge, or what Kuznetsov et al. refer to as a “zen feel”--in order to detect and interpret them (234). Even recognizing them as sensors in the first place requires “new ways of ‘seeing’ ” (234).

The low-fi properties of sensorius inks have historically allowed these sensors to escape the censors. In WWI, for example, spies crossed enemy lines with secret messages written on their skin in invisible ink that could only be revealed by applying an appropriate reagent, such as lemon (Smith 16-17). Such old-school steganographic techniques are currently being compiled and revived by artist Amy Suo Wu, who sees them as newly relevant in an age of mass surveillance (Morley). Her work can be interpreted in part as an analog extension of the digital obfuscation methods advocated by Finn Brunton and Helen Nissenbaum. In a slightly different register there is ‘To You’, a limited edition artist’s book of 150 copies made by Yiota Demetiou with assistance from Tom Abba. The 26 leaves of its concertina (“accordion”) binding are treated on one side with a thermochromic ink that renders them solid matte black panels on first inspection. The book needs to be touched to be read: the warmth of the hand activates the top-coat of ink which appears to dissolve, revealing a printed text beneath. The interaction speaks to the intimacy of reading, and the embodied relationship between every book and its reader. Here ink is sensorius, but it is also sensuous.

Like all inks but more so, sensorius ink blurs the boundaries between environment and information, between inscription and instrumentation. Its status as a site of industrial research together with the appeal it holds for artists and media workers reminds us that any imagining of the Future Of Text which consists solely of digital prognostications is impoverished and indeed politically and environmentally naive.

For the curious, we have documented several of our own proofs of concept carried out using a combination of printing methods at BookLab at the University of Maryland to further illustrate the possibility space. These examples are available at:


Katie Baynes

Words That You Can Feel

It's easy to take for granted the malleability and scalability of the human mind, but now, having a nearly two-year-old child, I've come to see the mind and its ability to consume information through a new lens and light.

We have a tapestry that hangs on the wall along the staircase. It's huge, the size of a floor rug that could cover a nursery floor. The previous owners of our home left it behind and we liked it enough to keep it.

Often when I’d carry our infant, who was around a year old, upstairs, she'd quietly point at the tapestry. Whenever I’d notice her hand gesture, I’d tell her “that’s a tapestry” or “tapestry”. Simple enough. We'd sometimes reach out and touch it, or talk about the pictures in it. She stopped pointing at some point and I didn't think of it any more.

Then one day, at around 17 months, once she was beyond the baby-babble communication phase and starting to say a few single-syllable words, we were walking up the stairs and she proactively said to me: “tap-e-stry” as we passed the hanging rug. Like most first-time parents, I was of course in awe of and impressed by my child's sophisticated speech. It was a pretty big leap! I had forgotten to mention the story to my husband, and yet the following day he excitedly told me her "most esoteric" new word: tapestry.

And I, of course, say: I know, I taught her! But he says no, I taught her! And then he explains how they would walk by the tapestry and she would silently point and he’d tell her what it was.

And then we both realized we didn’t teach her, she taught herself. Every day she asked us the same question. Silent, but still a question. With her little index finger, like a computer cursor, pointing at the big thing on the wall that she didn’t know, she was asking: what is that? And we’d answer, and she’d listen, and when, months later, she finally learned to speak, she said the word aloud herself.

She's still young, so in the earliest phases of language development, but now she can sing the ABC song and soon she will be reading the letters and learning the words. And all the while, she will be pointing and asking and we will be answering. 

Imagine if we could take that tapestry language experience and apply it to the printed word – where you could create text that a young learner could not only read with their eyes, but hear, feel… or smell, or taste? I've always been hard of hearing and relied heavily on the written word my whole life, preferring books to television (though now I enjoy the abundance and quality of closed captioning on all my streaming networks). I taught myself many words through reading - to the point where it took me saying "cha-os" out loud to finally connect the dots and realize that was the same word I'd heard -- kay-os.

What if the word chaos felt chaotic? An earthquake actually s h o o k? A heartbeat THUMPED? Imagine what the mind could learn then. Activating all the senses with text, the digital and written world could be brighter, louder, stronger than ever before.

And we could start this in early childhood so that the sensations became innate. So that the imagination was activated by text in similar ways as a truly sensory experience - like rolling down a hill on a summer day or standing in a rainstorm. As text is today, I can look at black words on a white background and dream up stunning visuals to suit, but what if the words took you further to begin with? Then imagine the platforms the imagination would be able to jump from. Then text could be… alive.

Keith Houston


It started with a heart, so the story goes. Emoji’s founding myth tells that telecoms operator NTT DOCOMO, at the height of Japan’s pager boom of the 1990s, removed a popular ‘’ icon from their pagers to make room for business-oriented symbols such as kanji and the Latin alphabet. Stung by a backlash from their customers, in 1999 NTT invented emoji as compensation, and the rest is history.

Only, not quite. It is true that in 1999, NTT used emoji to jazz up their nascent mobile internet service, but emoji had been created some years earlier by a rival mobile network. It was only in 2019, twenty years after emoji’s supposed birth at NTT, that the truth came out.

Does it matter, when considering the structure and transmission of text, that for two decades our understanding of emoji’s history was wrong? Safe to say that it does not, but it serves as a salutary reminder for those who care about such things that emoji are slippery customers. And now, having colonised SMS messages and social media, blogs and books, court filings and comics, these uniquely challenging characters can no longer be ignored.

* * *

As with essentially all modern digital text encodings, emoji lie within the purview of the Unicode Consortium. Almost by accident, what was once a head-down, unhurried organisation now finds itself responsible for one of the most visible symbols of online discourse. And, unlike the scripts with which Unicode has traditionally concerned itself, emoji are positively alive with change. Almost from the very beginning - that being 2007, when Google and Unicode standardised Japan’s divergent emoji sets for use in Gmail - Unicode has been on the receiving end of countless requests for new emoji, or variations on existing symbols. (Of note has been a commendable and ongoing drive to improve emoji’s representation of gender, ethnicity and religious practices.) Thus “emoji season” was born, that time of the year when Unicode’s annual update has journalists and bloggers scouring code charts for new emoji.

And therein lies a problem: emoji updates are so frequent, and so comprehensive, that it is by no means certain that the reader of any given digital text possesses a device that can render it faithfully. The appearance of placeholder characters - ‘’, colloquially called “tofu” - is not uncommon, especially in the wake of emoji season as computing devices await software upgrades to bring them up to date. Smartphones, which rely on the generosity of their manufacturers for such updates, are worst off: a typical smartphone will fall off the upgrade wagon two or three years after it first goes on sale, so that there is a long tail of devices that are perpetually stranded in bygone emoji worlds.

* * *

If missing emoji are at least obvious to the reader, the problem of misleading emoji is not. Although Unicode defines code points for all emoji, the consortium does not specify a standard visual appearance for them. It suggests, but it does not insist. As such, Google, Apple, Facebook and other emoji vendors have crafted their own interpretations of Unicode’s sample symbols, but those interpretations do not always agree. In choosing an emoji, the writer of a text may inadvertently select a quite different icon than the one that is ultimately displayed to their correspondent.

Consider the pistol emoji (), which, at different times and on different platforms, has been displayed as a modern handgun, a flintlock pistol, and a sci-fi ray-gun. (Only now is a consensus emerging that a harmless water pistol is the most appropriate design.) Or that for many years, smartphones running Google’s Android operating system displayed the “yellow heart” emoji () as a hairy pink heart – the result of a radical misinterpretation of Unicode’s halftone exemplar – that was at odds with every other vendor’s design.

These are isolated cases, to be sure, but it is perhaps more concerning that Samsung, undisputed champion of the smartphone market, once took emoji nonconformity to new heights. Prior to its most recent operating system update, Samsung’s emoji keyboards sported purple owls, rather than the brown species native to other devices (); savoury crackers rather than sweet cookies (); Korean flags rather than Japanese (); and many other idiosyncrasies.

Today, most vendors are gradually harmonising their respective emoji, while still preserving their individual styles. (Samsung, too, has toned down its more outlandish deviations from the norm.) But though the likelihood of misunderstandings is diminished, it is still impossible to be sure that reader and writer are on the same page: with emoji, the medium may yet betray the message.

* * *

Finally, and as absurd as it sounds, there is the prospect of emoji censorship. From 2016 to 2019, for example, Samsung devices did not display the Latin cross () or the star and crescent (). These omissions had mundane technical explanations, but it is not difficult to imagine more sinister motives for suppressing such culturally significant symbols. In fact, one need not look far to find a genuinely troubling case. Starting in 2017, Apple modified its iOS software at China’s behest so that devices sold on mainland China would not display the Taiwanese flag emoji. At the time of writing, as protests against Chinese rule rock Hong Kong, ‘’ has disappeared from onscreen keyboards there, too.

In this there are echoes of Amazon’s surreptitious deletion of George Orwell’s 1984 from some users’ Kindles because of a copyright dispute. A missing emoji might seem like small fry by comparison, but it is every bit as Orwellian: is a text written in this time of crisis devoid of Taiwanese flags because the writer did not use that emoji, or because it had been withheld from them? The case of the missing ‘’ shows how emoji, often derided as a frivolous distraction from “real” writing, can be every bit as vital as our letters and words. We owe it to them to treat them with respect.

Keith Martin

Blame Your Tools

A good worker knows when to blame their tools. Yes, I know that’s not how the saying normally goes, but I have a serious point: the tools we use influence the work we do. If the tools aren’t up to the job at hand, the end result will not be as good as it should have been.

There’s another saying that’s relevant here: if what you have is a hammer, every problem looks like a nail. When I test typeface designs I sometimes use pangrams, phrases that contain every character in the alphabet, to see how things set. One of my favourites is “an inspired calligrapher can create pages of beauty using stick ink, quill, brush, pick-axe, buzz saw, or even strawberry jam.” It’s a fun phrase, but as an actual claim it’s borderline nonsense.

Whatever you do, it will generally be easier if you use a tool that is (1) appropriate to the task, and (2) designed to fit your needs as the user. This is a User Experience concern, something that’s well known in the software development world. Which is why it is so peculiar that Microsoft Word is such a terrible example of UX design. Or perhaps I should say such a great example of terrible UX design?

Word processors are one of the oldest kinds of application and they are, other than web browsers, the most ubiquitous. Which makes it so strange that the world’s most used word processor is so incredibly clunky and opaque.

Most of us turn to the same tool when it comes to writing: Microsoft Word. But very few of us would claim to actually like the software; we muddle along, misusing it more often than not because it is so *£@%! hard to find the right features and keep things under proper control.

In case you didn’t know it, one of the secrets to mastering Word is to use styles. If you don’t use styles in Word, you’re spending far too much time trying to keep things under control. But then if you DO use styles in Word… you’re spending far too much time trying to keep things under control. It’s not so much a Catch-22, more just damned if you do, damned if you don’t.

The thing that hardly anyone realises is that Microsoft Word is not really aimed at writers. The majority of people who use Word have a primary job function that includes having to do some writing, yet where writing isn’t the core of their work. That in itself wouldn’t be a problem, but the software has evolved over the decades the way it has because its developers feel the actual needs of most Word users aren’t the same as the needs of professional writers. That, by the way, is a direct quote from Rick Schaut, Principal SDE, Microsoft Word, in 2004 [1]:

“The needs of most Word users aren’t the same as the needs of professional writers. A great example of this is the word count feature, over which reviewers like Adam Engst, who happen to be professional writers, have been knocking Word for quite some time. Most Word users don’t really care about word count.”

— Rick Schaut, Principal SDE, Microsoft Word

It’s important that we don’t take words out of context, but looking at what he was talking about doesn’t make it seem any better to me. Schaut was complaining that journalists – a class of professional writers – fuss about things like slow word count, something that Word’s ‘more common users’ aren’t worried about. But there’s a problem with this...

There are in effect three kinds of Word user: those in business roles, those in education, and those writing professionally. Business writers are most likely to be tasked with writing (for example) x number of pages, whereas academics, students and virtually all professional writers – two out of the three kinds of users – will normally write to a specific word count. Whether it’s a 500-word news article, a 5000-word essay or a 50,000-word manuscript, it’s how we work. We do care.

Curiously, I heard exactly the same ‘nobody really cares about word count’ complaint from the Microsoft Office for Mac product manager ten years earlier, in the mid-1990s. At that time the only way to count was to drag down a menu, go to a submenu, open a modal dialog that displayed the count, then click OK to get rid of it. In my efforts to explain journalistic needs to him I built a basic word processor that did one job: counted words live as I typed. It was minimalist but it did all I needed for writing articles. (I called it ‘Wordless,’ but I didn’t mention that detail to him.) After two years of this back-and-forth chat, at the launch of Office 98, he told me I’d like the new Word as it had the live word count feature I’d been banging on about for so long. Unfortunately it also had Clippy the annoying digital assistant – one step forward, two steps back!

Anyway, that’s just one example among many. The bigger point here is that Word is used by millions of people every day to organise their words, their thoughts, but it isn’t designed to help them do that. Instead, as Charles Stross once said [2], Microsoft Word is broken by design. What we use to organise our thoughts influences how we think, far more than we realise. That’s a simple logical consequence, and it means we are using broken tools that we don’t even like to shape our thoughts. Given this, there can be only one logical conclusion: stop using Word. Our tools should be elegant and fit for purpose, so find something different, something that makes writing easier. Not just a clone of the Office suite, something actually designed for the needs of writers. Blame your tools, then do something about it. Your thoughts, your words deserve it.


  1. Rick Schaut on ‘the needs of most Word users’
  2. Charles Stross, Why Microsoft Word Must Die

Kenny Hemphill

A Recipe For A Healthy Future

People don’t read anymore. We have the attention span of a new-born goldfish and anyone trying to use text to communicate in anything more than a single-sentence soundbite is wasting their time. That’s what we’re told. And that would seem to be the message from research recently reported in The Guardian, which showed that while many of us spend hours every day scrolling through text on our phones, our engagement with that text is almost non-existent.

Does that mean text has survived beyond its useful life? That we should give up on it and embrace other forms of communication instead? I don’t think so. While the way we communicate using text must and will evolve, text itself is still hugely important. Evidence? How about those auto-playing videos on your social media feed that have subtitles so you can follow what’s going on without having to plug in earphones or disturb other people close by? Or the oh-so-funny GIFs and memes that rely on text to make their point. Or the newspaper headlines on supermarket newsstands that are still a huge influence on the opinions of large sections of the population.

How can text evolve to retain, or even increase, its relevance? By having less of it. As someone who spends most of their working day reading and editing other people’s text, I see examples hourly of copy that is assembled with no thought for the reader or the demands on their time. Phrases like ‘in the month of’ or ‘at the present time’ or ‘remains to be seen’ abound. Strings of letters that do nothing to aid comprehension, but are a comfort blanket for the writer. They have no place in the bright, bold Future Of Text. They suck the life out of a sentence and the enthusiasm from the reader. Eradicate them, I say. Take a cleaver to your copy and hack great chunks of it away. Trust me, it’s liberating. Treat your reader and their time with the respect they deserve and put only the words in front of them that are necessary to deliver your message. Make your sentences short, your clauses few and banish semi-colons forever. Embrace bulleted lists and sub-headings. Cut, cut and cut some more.

Imagine your reader standing on a subway platform reading your copy on their phone as the train pulls in. That’s how long you have their attention for. Make it count and they’ll come back for more. Fail and your words will be forgotten before the train doors close.

None of that means that long-form text has no future, of course. But long-form articles, novels, or academic papers aren’t an excuse for ill-discipline. Respect for the reader is critical to the Future Of Text. After all, if we human writers aren’t able to show respect for our readers, how do we teach the AI bots and machine learning algorithms that will construct much of the text we read in the future to do it?

Ken Perlin

When Text Will Be Everywhere

Text has become fundamental to human communication. To be sure, it is a relatively recent development, having existed for only a few thousand years. Yet text seems like a logical outgrowth of our greatest shared biological heritage: our genetically shared ability to learn and evolve spoken language.

The advent of text imparted a new level of persistence to the fruits of language, allowing people to pass down their thoughts and wisdom through the generations in a way that was more robust than oral traditions.

But that wasn’t all. A text document could also be searched and archived in a way that spoken communication cannot. This quality of searchability has brought with it some wonderful consequences: If you have text, then you can have libraries, thereby allowing culture and wisdom to not only be amassed, but also to be studied and analysed in deep and original ways.

Right now we are in an interesting time in the history of text. Thanks to the imminent advent of wearables, computer screens will soon be disappearing from our homes, offices and public spaces, to be replaced by something fundamentally new -- and fundamentally better.

Technologically enabled dramatic shifts in culture and communication are not new -- they have been with us for many centuries. One notable example was the invention of the rotary printing press around 1870, which enabled the “dime novel”, and therefore promoted mass literacy in many parts of Europe and the Americas. Another was the Web, which allowed anybody to publish their written thoughts and ideas to the entire world, without requiring permission from a publisher. Yet another was the SmartPhone, which put the distributive power of the Web into everyone’s pocket.

And now we are seeing the transition from SmartPhones to wearables. Sometime in the next five years or so, that transition will be largely complete. When that happens, we will be able to make text instantly appear, wherever and whenever we need it. Signs that we see in the world will be dynamic and ever-changeable, and can be customized to each individual person looking at them. Translation of text between languages will be automatic and instantaneous, and will be able to appear in different ways to each person.

More fundamentally, text will become embedded in our 3D physical world. It will become a fundamental part of architecture, of work, of play, of the living of life itself.

In the near future, people will put on a pair of cyber-glasses before they leave the house in the morning. When they look at a tree or flower, or a beautiful building, they will be able to know instantly, should they so wish, its particular genus and variety, its age and country of origin, without needing to make a clumsy gesture like taking out their phone and pointing at the object in question.

To me the most fascinating thing about this evolution of text is that we don’t yet know what we don’t know. Imagine, by analogy, that someone had tried to explain Google to you in 1992, one year before the first practical Web browser. You probably wouldn’t have quite understood what they were talking about, let alone understood the need for such a thing.

Similarly, if someone had tried to explain Uber or Lyft to you in 2006, a year before the first practical SmartPhone, it would have seemed like purest science fiction. So many parts of the ecosystem for such services require everyone in the loop to have a SmartPhone.

Similarly, there will be future ways of being that take for granted a world in which text is ubiquitous all around us. Fundamentally new modes of communication will arise from this ubiquity.

I cannot tell you what those new modes of communication will be, any more than one could have predicted Uber in 2006. But I’m incredibly excited to find out.

Leigh Nash

The Way Forward Is Backward, And Then Forward Again, And Then —

I published my first book when I was six years old, in Grade One, back in 1988. We bound small books with tape and wallpaper scraps; we printed stories on the inside pages in pencil, illustrated them, and then read them out loud to our classmates.

Now, in 2019, I work as an “old-school” book publisher, meaning I invest in others’ stories: I pay advances and royalties, undertake editing and printing and distribution, and send authors on small tours to read to audiences of varying size.

Not much has changed in three decades, even as technology has pulled bookmakers forward by the ears. Print books are still more popular than digital books. The audiobook is poised to be the next breakout star for the reading/listening masses. With the reading of text aloud to others, we’re looking to a medium older than the printing press and the novel to bring us into the future.


I grew up using computers; my father worked in sales, and in programming. Later in life, he fell in love with ebooks and read voraciously up to his death. My father taught me to love reading and printed books when I was young, and he softened my initially hard stance toward digital reading. I publish books now in as many formats as I can afford to: print books and ebooks primarily, because, being the “new” technology, audiobooks are still expensive to produce.


Text is story, and story is memory and knowledge. A publisher is an intermediary between two people having a conversation—the reader and the writer—and an intermediary in the sharing of stories. It is our job to produce accessible texts for all potential readers, whether that’s print, digital, audio-visual, or an as-yet-unknown medium.

Publishing texts in any form remains a radical act; it is an act borne more of a love for communication and connection than commerce. I still have that first book I bound and wrote, and the first book of poetry my father read aloud to me. But I don’t remember the first ebook I ever read, nor do I remember the first audiobook I listened to.

I’ve grown up on the brink of the shift from analog text to digital text. Nostalgia keeps me connected to printed matter, to paper, ink, glue. But I also feel nostalgia toward DOS command screens and text-based computer games. Will I ever feel nostalgia for audiobooks?

In another thirty years, I’ll still be able to crack open the poetry collection handed down by my father; even if I did own a copy of that first ebook I read, would technology of the day allow me to open it? The presentation of text now almost feels beside the point of the communication itself; the Future Of Text depends on its resonance, its ability to meaningfully connect speaker and listener, however that may look.

Leslie Carr

The Future of Text

The past of text was mediated, curated and limited. According to a 2015 Pew survey, 72% of adult Americans had read a book in the previous year with four being the median number of books read. For each person that represents four annual decisions that a text is worth reading, four decisions that an author is trustworthy and four attempts to extend the reader’s cultural or educational world.

Unsurprisingly, as the Web brought more forms of text to the attention of readers, people began to panic about information overload (access to too much information to be able to cope with individually) and then filter bubbles (access to not enough information to fairly represent all ideas globally).

The present of text has become radically decentralised. Facebook, Twitter and other social media companies have billions of daily users generating 500m tweets per day and uploading 100m Instagram photos and videos. The staff of a national broadcaster such as the BBC may have produced 5 million tweets over the last decade, but its audience retweets, comments and responds 1 million times per day. The present experience of text is now billions of authors, each writing in the context of national and international conversations prompted by shared news stories, hashtags and viral media exchanged between hundreds of their followers and friends.

This mass hypertext capability is amazing, but it is not the epistemological utopia that hypertext pioneers anticipated. The present of text has become an urgent problem of incomprehension, uncommunication, malignant disinformation and political polarisation.

Social media platforms such as Twitter create a network of texts generated by a network of users. Tweets are not independent texts, but neither are they collaborative. Twitter accounts may represent the public thoughts of a private individual, the curated communication of a public persona, or the official communication of an institution. They may be created by a computational bot, a political activist, a bad-faith actor or a state-sponsored disinformation campaign. Tweets are generated in the context of, and in response to, the existing network of tweets plus other social media platforms and the rest of the web. The network of users extends itself as previously unconnected users become aware of each other’s tweets and explicitly follow each other. Different accounts play different roles in the social platform; they have different levels of engagement, different levels of activity, they accrue different audiences and instigate different responses. Those accounts with a high number of followers have a broad reach for all of their messages and are likely to achieve a greater number of likes, responses and retweets for any statement that they make.

Any specific statement read on Twitter then, needs to be understood not just in terms of the language that it contains, but also in terms of the history of the tweets that came before it, the tweets that it responds to and the role, significance and alignment of the account that made the tweet. Any collection of tweets can be examined in terms of the language of the tweets, the identities of the accounts involved and the changes in those properties over time.

To tackle the current problem of “fake news”, our social apps need to be substantially upgraded. Rather than make four reading choices per year, readers are being asked to make dozens of trust judgments per minute. Every time they read their social media timeline they need provenance support from AI and data science algorithms – network analyses, time series analysis, natural language processing, topic modelling and sentiment analysis.

The future of text is reading between the lines.

Lesia Tkacz

The Quantum Physics Of My Imagination, Language Technology, And A Future With Creative Text Generation

There are similarities between language technology and quantum physics[1] which I keep turning over in my mind. The turns I will take in this text eventually lead to a critique of how language technology is dominated by corporations. I also offer the emerging form of creative text generation as a means of renegotiating how we want to use language technology. While I’ve studied language, I am not a physicist, so I can only draw from the quantum physics of my imagination. Nevertheless, I find it interesting that classical physics can be relatively easily observed in physical space, and that its laws are well understood. However, when forces are observed on the minuscule sub-atomic scale, the unfailing laws of physics no longer apply. Another world has been discovered which does not bend to our sensible rules, and which cannot relate to anything we currently understand. I imagine that research physicists are often baffled by what they observe in the quantum world, and are under the constant challenge of having to understand what their observations mean, and how they could be applied to human scale, or, the familiar physical space that we occupy. Physicists can perform observations and applications on a massive scale by using particle colliders — the largest machines ever built. These machines are supported by globally distributed high performance computing networks, by data centers which can draw as much power as a town, and by international communities of scientists and engineers.

Drawing a parallel to quantum physics, I am similarly interested in how language appears to behave differently at extreme scale. This is in contrast to what we have come to expect from language on the human scale, where written and spoken communication between people is used, and from which linguistic rules are derived. For example, linguists have for decades been comfortably studying language on the atomic level (at the scale of the morpheme and phoneme), up to the larger view of how it is shaped by time[2] (diachronic or historical linguistics). But language has been seen to behave in weird, unrelatable, and virtually unexplainable ways when it is observed at colossal speed and scale through the lens of machine processing. This machine perspective on language has the effect of refracting it in bizarre ways. The workings and results of some language technologies are arguably poorly understood, and they do not always fit with how we currently understand and actually use language as human writers, readers, speakers, and listeners. The technology therefore seems outlandish when it processes millions of observations and harvested instances of people’s actual language use, in order to calculate strings of morphemes, to vectorize and plot words, mine opinions like valuable natural resources, forecast phrases, measure sentiments, and when it routinely attempts to predict the future. It is almost as if we are in the midst of discovering a quantum physics of language, where the properties and behaviours of language at extreme computational scales and speeds are governed by radically different and largely unknown forces.

Like the forces and behaviours researched by quantum physics, language technology research requires specialized tools and infrastructure for gathering and processing data, for development, and for conducting experiments. For both fields, the infrastructure they use can be on the industrial scale. The Large Hadron Collider[3] is 27 kilometers in circumference. Google’s land portion of the Tahoe-Reno Industrial Center[4], which is slated to become a datacenter, is almost 5 square kilometers. However, where physics research infrastructure is typically shared by public institutions, some of the most powerful infrastructure for language technology is owned, rented, operated, and priced by the dominant technology corporations. Among others, these include Amazon, Google, and Microsoft. Researchers with access to a university’s high performance computing infrastructure cannot compete with a multinational corporation’s gargantuan processing power or resources. Researchers do not have the funding to purchase it, nor can they expect to learn or test how proprietary language technology beats current research benchmarks. This is extremely problematic for the Future Of Text; the power imbalance means that technology corporations can dominate end user digital spaces, by cornering networks and markets with their presence and industrial capital. The web is one such digital space. The power imbalance risks the diversity of the digital ecology on both the human scale and on the computational scale. This is because it creates a monoculture[5] of language technology, and dictates how it should be used.

For this reason, the wonderful language and computational literature experiments which are proliferating in web culture are valuable for diversifying the digital ecology. Examples include Twitterbots, travesty generators, themed predictive keyboards, neural network humor, and computer generated novels and poems. I refer to all of these as forms of creative text generation[6] because they prioritize creativity over functionality, and are therefore not constrained to strict functional requirements[7]. At least for the time being, those who create generated text works are free to select their own constraints, free to use whatever language technology they are able to in any way they wish, and for their own purposes. Whether the work’s execution is considered to be state-of-the-art or simple and archaic, ingenious or silly, successful or failed, the point is that individual creators are working out for themselves how language technology could be used and for what, and they are populating the digital ecology with their creative projects. My vision for the Future Of Text is to see creative text generation being used extensively by small groups and individuals for their own benefit, experimentation, and creative enjoyment. Such use can help digital text to have a negotiable and evolving future in more people’s lives. Otherwise, the Future Of Text technology risks continuing upon a solely corporate path, where it cannot be influenced by other forces which can help to direct and shape it.


  1. I would have loved to make a joke about how the unexpected results from language processing experiments were caused by ‘spooky action at a distance’. Or, in the case of LDA topic modeling, perhaps spooky action at a distant reading.
  2. Here belongs a reference to either the fourth dimension or spacetime.
  3. The Large Hadron Collider is the largest particle collider in the world. It is located in Geneva, Switzerland.
  4. Tahoe-Reno Industrial Center is located in Nevada, USA. It is the largest industrial park in the country.
  5. Whether it’s a monoculture of technology use and purpose (as discussed here), of crops (agricultural), or of practiced culture and language (linguistic), a monoculture can have negative if not detrimental effects on its ecology or population.
  6. With this term I mean to include both combinatorial and generated text works, whether they are human edited and curated at some point or not.
  7. I define functional text generation as reports which are computer generated from weather, sports, elections, finance, and health data. Unlike texts generated for creative purposes, functional text generation is required to have staunch coherence, factuality, and completeness.

Leslie Lamport


What inspired you to take the route you did originally, why did you create LaTeX?

I had no choice. I was forcibly expelled from the womb.

— Leslie Lamport

As Leslie Lamport wrote in My Writings 15 October 2019:

In the early 80s, I was planning to write the Great American Concurrency Book. I was a TeX user, so I would need a set of macros. I thought that, with a little extra effort, I could make my macros usable by others. Don Knuth had begun issuing early releases of the current version of TeX, and I figured I could write what would become its standard macro package. That was the beginning of LaTeX. I was planning to write a user manual, but it never occurred to me that anyone would actually pay money for it. In 1983, Peter Gordon, an Addison-Wesley editor, and his colleagues visited me at SRI. Here is his account of what happened.

Our primary mission was to gather information for Addison-Wesley “to publish a computer-based document processing system specifically designed for scientists and engineers, in both academic and professional environments.” This system was to be part of a series of related products (software, manuals, books) and services (database, production). (La)TEX was a candidate to be at the core of that system. (I am quoting from the original business plan.) Fortunately, I did not listen to your doubt that anyone would buy the LaTeX manual, because more than a few hundred thousand people actually did. The exact number, of course, cannot accurately be determined, inasmuch as many people (not all friends and relatives) bought the book more than once, so heavily was it used.

Meanwhile, I still haven’t written the Great American Concurrency Book.

Livia Polanyi

The Future Of Text

Ever since human beings created their first texts – symbolic representations of language inscribed somewhere, somehow, to aid memory or to communicate thoughts to other people – how texts are constructed has depended on several factors: the means at hand to inscribe the message, in terms of both representational conventions and the technology and materials of inscription; the facility of those putting down their thoughts to be decoded later, in terms of both mastery of inscription and of representational conventions; and the immediacy of the intended audience, in terms of time, space, the nature of their interest, their access to the message and their knowledge of the representational conventions. While once a message might be narrowly delimited by the simplicity of the inscribing code, as in making slash marks to record possessions or transactions, once written language was developed, much more complex messages could be assembled by those who knew the code and addressed to those who were also literate. Often, in early times, these readers were people known to the writer – another merchant with whom the writer was trading, for example – but soon enough, addressing a message to posterity attracted the attention of rulers in both the Old and New Worlds who wanted the glory of their accomplishments known well beyond their deaths. Once printing became commodified and access to messages written on paper became common, letters, books, newspapers, account ledgers and myriad other text media allowed individuals to communicate written material both to individuals and to groups of known and unknown people. When we consider, then, what the Future Of Text might be, we must take these same factors into account: how a message can be “written”, who the intended recipients might be and where they are located, and who will have access both to the medium of textual transmission and to the codes in which messages will be inscribed.

How texts will be created in the future depends, of course, on continued innovation in and commodification of the means of inscription. I will leave discussions of media and machines to the technologists most immediately involved in creating new means of message production and transmission, and confine myself to a few observations about the writers and readers who will require both physical and knowledge-based means of text production and reception. This brings us to the so-called digital divide, and even more to the most important human divide separating the haves from the have-nots throughout the world: active and passive mastery of written language. One might argue that with voice recognition and automatic transcription, written language will become obsolete and the divide between the literate and the illiterate will become irrelevant. This is clearly not the case. Complex scientific, technical, legal and financial texts, which may require many re-readings, notations and study to understand and respond to, cannot be created or worked with through oral language alone. Referring, cross-referencing, and comparing the subtleties of phrasing and re-phrasing require access to a written record. Those who do not have the means or the knowledge to create and understand complex textual records will remain excluded from the information, and therefore the power, that knowledge of complex reading strategies brings. Universal literacy, therefore, is a prerequisite for the democratization of access to information. Hopefully, then, the Future Of Text will involve the acquisition of the tools of written-language production and reception by the peoples of the world who are currently excluded from sharing in text as it is inscribed and used today.

Lori Emerson

“Insanely Great”

For better and for worse, the future of digital computing interfaces is the Future Of Text and surely no better magic mirror exists for divining this future than advertisements for new Apple products.

One of the more recent advertising campaigns began in November 2017 as part of yet another launch of yet another version of the iPad in yet another campaign to vanquish not merely “the competition,” and not merely the computing industry’s sense of what’s possible, but our awareness of there being programmable computers at all. Shamelessly drawing from the prevailing belief that teenagers are the most valuable consumer demographic when it comes to tech, the ad opens with a young person about thirteen or fourteen years old, gender indeterminate, sailing away from a New York City walk-up apartment on his or her fixie and into the free-wheeling world of urban teen hang-outs. From stoops to parks, sidewalks, taco shops, coffeeshops, buses, and even tree branches, we see her (this vibrant young person who appears entirely “of” the modern world turns out to be a “she”) chatting with friends online, drawing hearts on the screen with a stylus, snapping social media appropriate “pics,” and creating mixed media art -- all to the tune of electronic pop duo Louis the Child’s refrain of “where is it you want to go?” The advertisement is only sixty seconds long but already, forty-five seconds in, we know without a doubt that this pre-teen knows better than we do that with this device, not even noticeable as a device, you can go anywhere and do anything -- precisely because it is a perfect extension of her. The ad closes with her lounging in her backyard, iPad resting on the grass as if it’s also as much a part of the natural world as flowers or trees, while she easily and unselfconsciously is immersed in what we can only assume is a magical land on the other side of the screen. 
With only seven seconds left, this world of magic is suddenly interrupted by a friendly neighbor in her forties or fifties who leans over the backyard fence and asks, “Whatcha doin’ on your computer?” She responds not by explaining what she’s doing or what she’s seeing in the parallel world of digital magic but with what we are supposed to think is an unanswerable question: “What’s a computer?”

I too often despair that this – the continued merging of magic and media, and the subsequent disappearance of our writing tools – is the Future Of Text. But in my better, more hopeful moments, I dream of a future for text that wholeheartedly rejects gloss, aura, illusion, deception, foreclosure, invisibility, nature, intuition, seamlessness and instead returns to Alan Kay’s 1970s vision of the computer as “meta medium” that not only provides us with “the ability to ‘read’ a medium [which] means you can access materials and tools created by others” but also gives us “the ability to ‘write’ in a medium [which] means you can generate materials and tools for others. You must have both to be literate.” And we can only write tools, and tools for tools, with an interface that is open, accessible, extensible. Surely this is the true meaning of “insanely great.”


  1. Apple. “What is a Computer?” YouTube, accessed January 20, 2018.
  2. Alan Kay, “User Interface: A Personal View,” in The Art of Human–Computer Interface Design, ed. Brenda Laurel (Reading, Mass.: Addison-Wesley Publishing,1990), 193.

Luc Beaudoin & Daniel Jomphe

A Manifesto For User And Automation Interfaces For Hyperlinking: How Hypertext Can Enhance Cognitive Productivity

Deep knowledge work typically involves interacting, via web browsers and manifold apps, with many pragmatically related local and remote information resources. For instance, when writing a document one typically processes a task list, edits documents (e.g., a draft, outline, figures, notes, spreadsheets), and reviews reference material, notes about them and communications. For peak performance and because information in the brain’s working memory[1] decays rapidly, one must rapidly access such information. Searching for information and navigating folders replaces working memory content with trivial information, breaking flow[2]. Navigating apt links, in contrast, is relatively quick and easy, extending long-term working memory[3] which underpins expertise. It would therefore be immensely useful for any resource, whether local or remote, to easily be linked to any other.

Figure 1. Typical creative project involving manifold related resources

In his Conceptual Framework for Augmenting Human Intellect (AHI)[4], Douglas Engelbart stipulated that in hypertext “Every Object [ must be ] Intrinsically Addressable (Linkable to)” (p. 30). Unfortunately, many of the most popular apps still do not provide a way for users to copy a hyperlink to the resource that is currently open or selected by the user! Yet copying links is arguably as important as copying text. And of those that do, many do not provide an application programming interface (API) to obtain hyperlinks for their data.

These limitations could easily be overcome, which would yield immense cognitive productivity[5] benefits.

Modern hypertext is still in silos

For a resource to be linkable one must be able to get its address and name in a format one can later use — ideally a hyperlink (“link”). URLs are a standard addressing format for web links (see RFC 3986[6]).
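URLs of any scheme share the same RFC 3986 anatomy (scheme, authority, path, query), which is why a generic parser can decompose an app link just as it decomposes a web link. A minimal sketch in Python, where the omnifocus:// task identifier is invented purely for illustration:

```python
from urllib.parse import urlparse

# A web link and a hypothetical app link; both are valid RFC 3986 URIs.
web = urlparse("https://example.com/notes/42?view=edit")
app = urlparse("omnifocus://task/ABC123")  # illustrative identifier

print(web.scheme, web.netloc, web.path, web.query)
# https example.com /notes/42 view=edit

# Note that in an app link the segment after "//" lands in the
# authority slot (netloc), even though apps often treat it as data.
print(app.scheme, app.netloc, app.path)
# omnifocus task /ABC123
```

The point is not this particular parser but that, because app links reuse the standard syntax, any link-managing tool can store and route them without scheme-specific logic.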

Increasingly, mobile and desktop apps provide a user interface (“UI”) command to copy an “app link” to the current item. An app link is a link that opens in a local app, as opposed to the web. For example, the OmniFocus app by OmniGroup provides a Copy Link command that yields links of the form OmniFocus://… . Users can then conveniently paste such links into their notes, todo lists, emails, etc.

Modern operating systems like Apple’s macOS 10.14 and iOS 13 enable apps to register themselves as servers of URLs of a specific scheme. For example, OmniFocus can register itself to serve OmniFocus:// links.

Copying links is one of the most important and fundamental hypertext functions. Yet even today, many apps do not provide a UI for getting a link to the current object. Consider three examples on macOS. First, Apple Mail, Microsoft Outlook for Mac, and many of their competitors do not provide a Copy Link function for emails. Yet a URL scheme could be created for it based on the email’s ID, since every email has a unique ID (per RFC 5322, Internet Message Format[7]). Nor do they provide a command to copy the RFC message-ID or one to open the message by ID. Second, on macOS, there is no UI command to get a URL of a file, let alone a stable one. Third, many task management and note taking apps, like Apple’s Reminders, do not expose a Copy Link function. Moreover, often, when a Copy Link command is supported, it returns a universal https:// URL that may take the user to a web service (Dropbox v 102.4.431 and Todoist 7.3.2 (12025) are but two examples). Confounding users even more, Copy Link commands are inconsistently placed across apps, often requiring several gestures to access.
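To make the email example concrete, here is a hedged sketch of how a Message-ID could be turned into a link: the message:// scheme name below is illustrative (not a claim about any particular mail client), but the underlying idea is exactly as the paragraph above states, since RFC 5322 already guarantees a globally unique identifier per email; the only work is percent-encoding its reserved characters per RFC 3986.

```python
from urllib.parse import quote, unquote

def email_link(message_id: str) -> str:
    """Build an illustrative message:// link from an RFC 5322 Message-ID.

    The scheme name is hypothetical; the point is that the unique
    Message-ID gives every email a stable, linkable address once its
    reserved characters (<, >, @) are percent-encoded.
    """
    return "message://" + quote(message_id, safe="")

link = email_link("<1234.abcd@example.com>")
print(link)  # message://%3C1234.abcd%40example.com%3E

# The transformation is lossless: decoding recovers the original ID.
assert unquote(link[len("message://"):]) == "<1234.abcd@example.com>"
```

Any client that could resolve such a link back to a message by its ID would give emails the same first-class linkability this manifesto asks of all resources.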

Hyperlinking usability requirements for OS and app developers

To provide the benefits of links to their users, developers should provide a Copy Link command for their apps' data. For apps that store data locally, it must normally be possible for the user to get either an app link or a locally resolvable universal link[8]. For software whose data is accessible both via a web browser and via another app, a simple setting or gesture should let users choose whether links of a given scheme are served by the web browser or by the app.

OS vendors should provide guidelines for the Copy Link command to be presented in a specific location. On macOS, it is most commonly in the Edit menu. Users must also be able to assign a keyboard shortcut to this command globally and per app.

Link API (automation) requirements for OS and app developers

To enable their users to create, extend and navigate networks of information that include the app data they generate, apps must provide APIs to:

  1. return a link (name and URL) of the current selection (for document-based resources, a parameter may specify whether a deep or document-wide URL is returned);
  2. open or reveal the item at such URLs; and
  3. for apps that create documents or data: to create an item of the specified name (optionally in a given context) and return the new item’s URL.

It would not be realistic, however, for this manifesto to impose or seek a standard API or schema for link data beyond standard URL syntax (RFC 3986). Whether the APIs are exposed via the command line, JavaScript or some other mechanism does not matter to this manifesto, so long as they are professionally implemented and documented. Such APIs would allow developers to build truly universal bookmark and link managers.
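The three numbered operations above can be sketched as a minimal in-memory toy, with all names (the class, the noteapp:// scheme, the item/ path) hypothetical, since the manifesto deliberately leaves the concrete API surface to each app:

```python
import itertools
from urllib.parse import urlparse

class LinkableApp:
    """Toy note-taking app exposing the manifesto's three link APIs:
    copy a (name, URL) link for the current selection, open an item
    by URL, and create a named item returning its URL."""

    SCHEME = "noteapp"  # invented scheme, for illustration only

    def __init__(self):
        self._items = {}               # item id -> name
        self._ids = itertools.count(1)
        self.current = None            # id of the selected item

    def create_item(self, name: str) -> str:
        """Create and select a new item; return its URL (operation 3)."""
        item_id = str(next(self._ids))
        self._items[item_id] = name
        self.current = item_id
        return f"{self.SCHEME}://item/{item_id}"

    def copy_link(self) -> tuple:
        """Return (name, URL) of the current selection (operation 1)."""
        return (self._items[self.current],
                f"{self.SCHEME}://item/{self.current}")

    def open_url(self, url: str) -> str:
        """Reveal the item a URL points at; return its name (operation 2)."""
        parsed = urlparse(url)
        assert parsed.scheme == self.SCHEME
        item_id = parsed.path.rsplit("/", 1)[-1]
        self.current = item_id
        return self._items[item_id]

app = LinkableApp()
url = app.create_item("Draft outline")
print(url)                 # noteapp://item/1
print(app.open_url(url))   # Draft outline
```

A universal link manager would then need only these three calls, whatever URL scheme each app happens to use.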

Hook Productivity: A working proof of concept of universal context-sensitive bookmarking, linking and information navigation

For linking operations to be instantly accessible they should be presented by an app that interacts with all linkable apps. Enough macOS apps already support linking APIs to render possible this new type of software: truly universal (URL-scheme agnostic) context-sensitive bookmarking and link management. We have developed such an app (on macOS and coming to other OS’s), Hook productivity[9]. The contextual resource is defined as the resource that is currently open or selected in the foreground app (such as a web browser, email or task management app). Hook provides consistently located linking commands, keyboard shortcuts and automation to:

1. simultaneously copy and bookmark a link to the contextual resource,

2. “hook” (bidirectionally link) two resources together,

3. simultaneously create, store and name new items in the app of the user’s choice, while hooking the new item to the contextual resource,

4. navigate the network of information hooked to the current resource, or other resources, whether local or remote,

5. search for bookmarked items (escaping contextual resource), and

6. more.

Hook defines a URL sub-scheme (hook://file) for files, robustly serving such URLs even if the file has been moved, and one for emails (hook://email, based on their RFC ID) usable by several email clients; its links are shareable, enabling recipients to access sender-referenced local copies of emails and files. When the contextual app defines its own URL scheme, Hook’s Copy Link function normally returns the app’s URL.

Figure 2. Universal link management functionality, provided by Hook, built on modest linking APIs

Hook represents a significant step towards demonstrating and achieving the value of Engelbart’s hyperlinking vision, transcending the limitations of web browsers that he lamented.


Enabling users to rapidly address and navigate information across apps and services enables them to be cognitively productive: remembering, understanding, analyzing, applying, synthesizing and creating solutions, knowledge and other products.

We therefore strongly encourage all developers to ensure their software provides APIs and easily accessible user interfaces for getting and serving links to its data.


  1. Working memory:
  2. Flow:
  3. Extending long-term working memory:
  4. Conceptual Framework for Augmenting Human Intellect (AHI):
  5. Cognitive Productivity:
  6. RFC 3986:
  7. RFC 5322: Internet Message Format:
  8. Universal links:
  9. Hook productivity:

Manuela González Gómez

Does Handwriting Have A Future In The Digital World?

The world is changing and so is our written communication. These days, most of it happens via text messages on mobile devices or is tapped out on a keyboard. In a society that puts a premium on immediacy, this form of writing has the advantages of speed, clarity of text and ease of correction. Handwriting is therefore used less and less, to the extent that some voices already question the value of learning it at all.

Is this the beginning of the end for handwriting?

The span from handwriting’s invention to our time is brief in evolutionary terms, and yet in that period it has undergone great changes. If we look back, we see that its progress has not been linear; there have been points when it was thought it might disappear. The invention of printing, for example, prompted predictions of its demise; what actually happened was that its use increased, because the greater availability of books led people to read – and thus to write – more. Later it was also threatened by the advent of the telephone and the typewriter, and although its use diminished, it did not disappear.

Today we have the paradox that, as digital technology renders it devoid of practical utility, that very technology is for its part also proving to be an ally in helping to expand its horizons into other fields, such as for example calligraphy and graphic design. In the case of calligraphy, there is a growing interest in this graphic art activity, perhaps because in an ever more technological and globalised world, calligraphy is a kind of rejection of uniformity and a reaffirmation of human individuality – as well as a tool for relaxation and creativity, as it increases neuronal activity in certain areas of the brain in a similar way to meditation. This rebirth is not just confined to its more traditional expression on paper, but can also be seen happening on digital devices.

The latest advances are also helping to expand the role of handwriting in design and branding as a means of personalising a brand and giving it added value, making it more authentic and unique and thus creating an emotional bond or connection with the client.

Handwriting, on paper or digital media, is good for keeping the brain in shape and for learning, activating large regions of it responsible for thinking, language and working memory. And in one way or another, it could continue to play a role in the society of the future given the challenges it will have to face. Most importantly, perhaps, human civilization is going through a period of rapid technological change and its future will depend on the control we can establish over increasingly intelligent machines. The aim, therefore, will be to ensure that those machines understand humans and remain subject to them, so that our civilization thrives. This also implies that the machines should know our history and, in that context, what the invention of writing as a way of communicating and conveying knowledge, thoughts and ideas meant for human beings. They should know that handwriting is an integral part of our culture and something that characterizes the person and differentiates us as unique individuals, as no two hands are the same. They should know that it has its gestures, its space, its rhythm and its harmony, and that the union between letters is a kind of metaphor for the need that human beings have to connect with each other; to communicate.

In this way, artificial intelligence might prove transformative for human beings – resolving, for example, the communication problem that has existed since the first people dispersed over the different continents giving rise to many different languages. It could, perhaps, help create a global means of communication to complement existing natural languages, in which handwriting could have a role. Advantage could be taken of the fact that handwritten symbols, independently of the alphabet and social conventions, are very expressive in a human and gestural sense, thus making for easier connection with others and leaving a clear imprint on their memories. It would involve creating something that is not exclusive but inclusive: something that unites us as individuals and maybe, in some distant future, something that reminds the beings that exist then that they are descended from humans.

Marc-Antoine Parent

Perspectives And Overview

One key function of language is to coordinate action, and the underlying perspective that justifies action. Perspectives emerge from a lifetime of past experiences and exchanges, are incredibly rich and complex, and may or may not be self-consistent. Funneling that tangled web of ideas into the linear medium of language (written or otherwise) is a difficult art for a single person; it’s even more difficult when a community tries to describe its own evolving shared understanding. In that case, there are two conflicting aims: expressing the common view that has emerged so far, clearly and concisely, while also displaying the full diversity of viewpoints, particularly around contentious issues or the expression of still emerging, unformed ideas.

Text in particular, which is meant to endure in time, can express a single voice, as seen in stories or polemics; or in treaties that record the end process of achieving consensus. Either way, text is a snapshot of a given moment in the flow of thought.

Yet time never stands still, and enduring communities need to dialogue with the text; hence the long tradition of commentary, which became hypertext once we could rearrange text fragments dynamically. Meanwhile, forums have given us a new genre: large-scale written conversations. However, conversations are anything but concise, and there is a tension between showing the dynamic play of ideas in a community and offering an accessible synthesis. Wikis are another point in that design space: collective texts continuously refactored towards clarity with sustained effort, yet retaining memory of past edits. When the maintenance falters, fallow pages stop reflecting the evolution of the community.

More importantly, wikis excel at showing the consensus view, but factions can turn pages into battlefields. Wikipedia developed the practice of separating the contentious conversation onto a distinct page, which requires its own maintenance.

There are many other valuable experiments that aim to allow controlled evolution of a collective text, with aspects of moderation, but I would like to propose other directions. First, why should there be a single target text? Mike Caulfield has proposed choral descriptions of ideas: that there could be multiple valid versions of a single text. Ward Cunningham’s Federated Wiki proposes another implementation, with divergent page clones. This is reminiscent of software forking, and it is a positive development for expressing diversity of perspective. As in GitHub, divergent paragraphs can also be merged back into any page and retain their individual history.

However, a purely divergent system does not give access to an overview. I propose that pure text can never offer a perspective that is at the same time global, dynamic and intelligible. However, I also believe that it is possible to offer this around text. What we can do is to enrich text with topic anchors in the margins. The topic anchor should give a visual indication of the size and activity level of the conceptual neighbourhood of the topic. In particular, if any topic in the neighbourhood of the text topic is unknown to the reader, the topic marker should indicate so. Hovering on the topic marker should give a list (partial, ranked) of related topics, whether they are alternative views, related issues, or arguments around the topic; and possibly also give an indication of the size of the community following each of those related topics. A separate interface could also provide synoptic graph views of related topics.

Enriching texts with concept anchors could be done at the moment of writing or afterwards, including on someone else’s text. It breaks the wiki assumption of a single topic per text; sometimes, a key topic in the text is identified only long after it was written. This is a much higher requirement than traditional hypertext backlinks, because for very rich conversations there will be a large number of related topics, which we would want to group into semantic clusters (or at the very least de-duplicate). It is also much richer than semantic entity identification, as we may want to identify higher-order semantic entities, such as links between concepts. This could be done by human, artificial, collective or mixed intelligence. Right now, we are defining a data model and event-based protocol for expressing these topic maps and their accompanying conceptual graph, allowing a mixed ecosystem of information appliances to produce, combine and enrich this global concept graph; we hope to experiment with semantically enriched text.
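By way of illustration only (the actual data model and protocol are still being defined, so every name and field below is a hypothetical sketch, not the real design), a topic anchor might tie a span of text to a topic in the shared concept graph, and a hover pop-up might show a partial, ranked list of its neighbourhood, surfacing topics unknown to the reader first:

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    """A node in the shared concept graph."""
    id: str
    label: str
    followers: int = 0                            # size of the community following this topic
    related: list = field(default_factory=list)   # ids of neighbouring topics

@dataclass
class TopicAnchor:
    """Marginal anchor linking a span of text to a topic neighbourhood."""
    topic_id: str
    start: int    # character offsets of the anchored span
    end: int

def rank_neighbourhood(anchor, graph, known, limit=5):
    """Partial, ranked list of related topics for a hover pop-up.

    Topics the reader does not yet know come first; within each group,
    larger follower communities rank higher.
    """
    topic = graph[anchor.topic_id]
    neighbours = [graph[t] for t in topic.related if t in graph]
    neighbours.sort(key=lambda t: (t.id in known, -t.followers))
    return neighbours[:limit]
```

A synoptic graph view or a semantic-clustering step could be layered on top of the same structures; the point of the sketch is only that anchors live beside the text while the concept graph evolves independently of any one document.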

Marc Canter

Text is the instrument of communication, expression and the connecting bond between humans. Linguistics tells us that language can be conveyed without text – through body language, visuals and even music/sound.

“Linguists traditionally analyze human language by observing an interplay between sound and meaning.”

— Wikipedia

Since this is the Future Of Text book, I will limit my rant to a new kind of tool that takes poetry, written discourse and self-expression into new areas.

We ARE talking about the future – right? This is the future of storytelling that goes beyond just text.

Three trends merging together

I’m going to describe a new kind of entertainment product (called Instigate) which brings together three trends.

The product is a mobile App which combines text with video, images and special EFX into an amalgam of conversational storytelling, memes and interactive entertainment. Text is utilized to tell a story, to overlay on top of images and video, and as the “voice” of a supposed sentient Being – which acts as a proxy for the Creator.

In the evolution of Creativity, online tools have enabled non-technical Creators to forge interactive possibilities while leveraging the world wide web and audience engagement. Commenting, by contrast, has not really evolved in over 20 years, even though it is what brings the audience up to the level of participants in the creative artifact.

Digital Natives utilize their smartphones not just for web browsing, communication or eCommerce – but also as a core means of expression. Whether it be via social media posting, video or image processing and editing or any of a myriad of new “creativity” platforms (such as Tik Tok, Instagram, Twitch or YouTube) – each of these forms of self-expression take social sharing, media and messaging to new heights.

Conversational Messaging is now the mainstream norm of digital communication, with a “sender” of a message stating their message in text (on the left-hand side) and the recipient of the message replying with text on the right-hand side of the message thread. This simple interplay is enhanced in Instigate with the inclusion of scripted media (images, video, sound) and a semi-autonomous ChatBot (which we call a “Being”).

Instigate Creators weave a tapestry of fun with media and their Being and then share the results with their friends.

Three-way Conversation

An Instigate Being is really a scripted one-sided conversation where the Being’s creator “anticipates” how the conversation participant will react and respond to various statements, questions, storylines – put forth by the Being and her accompanying media elements.

The Creator crafts a Script, which sequences photos, video and sound/music – to tell a Story – and enhances that Story by building in text and media responses that provide interactive possibilities in the narrative. Storyline branches, questions and answers and “sentient personality” are all possibilities within an Instigate conversation.

The Story unfolds vertically inside of a conversational interface, with participants choosing to either directly reply (via text) to any Story element in the conversation or step forward to the next element in the Story.

The result is a new kind of Interactive Narrative which picks up where Instagram Stories leave off.

A three-way relationship is established as the Being Creator builds a Script (which is made up of the Being’s text, media playback and special EFX) and then privately “shares” that Being with a friend, family member or colleague. The Conversation “sharee” converses with the “supposed Being” and the Uncanny Valley is given the finger – as we all know there’s a (wo)man standing behind the curtain – pulling levers and turning knobs.

Script editor

A vertically oriented Story Script editor is utilized to build the Being’s Scripts. Individual media, text and special EFX elements are associated with Script tiles – which can be dragged up and down the Script, cut, copied or pasted and, in general, treated like individual Story elements.

Elements can be copied and pasted between Beings leading to a Remix-like environment.

A Home Timeline is provided, which not only holds Tutorial, Example and Showcase Beings for all to use, but also provides a place for Celeb and Brand Beings to spread their messages and products.

The Home Timeline is also the destination for ambitious Creators who would like to complete the aspirational journey of their Beings, by pushing their Beings out into the public eye.

Good news – anybody can create a Being, utilize AI and special EFX to create a compelling conversational storytelling experience.

Bad news – bad actors pollute and infuse publicly shared creative output with their vile and hateful engagement. But have no fear: Instigate will provide a mechanism for Creators to “prune out” hateful or unwanted influences that have been “input” into the psyche of their public Beings!

AI is really hard to grasp, control and build

One of the underlying purposes and “problems to solve” that Instigate is undertaking is “how can regular people control AI?”

The answer starts off with a gradual unveiling and progression through the product, covering all of the major concepts, techniques and methodology required for “training sentient Storytelling Beings.” That progression is provided through Levels of the system, with Creators earning Points through their creative actions.

Training Beings to speak a Creator’s “language” is facilitated through simple UX/UI lo-code tools, audio (and in the future image) recognition, and an understanding of the topics and backstories that form the essence of their Being’s personality.

Machine learning studies not just the Being conversations (making them smarter over time), but also how Creators utilize the tools. Each Being has its own “Knowledge Graph” which grows more semantically rich over time and is utilized to make the Being appear to “understand the context of the Conversation.”
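The mechanics behind such a per-Being “Knowledge Graph” are not described here, but the general idea (a graph that grows semantically richer as conversations accumulate) can be sketched. The class below is a hypothetical illustration using simple topic co-occurrence, not Instigate’s actual implementation:

```python
from collections import defaultdict
from itertools import combinations

class KnowledgeGraph:
    """Toy per-Being knowledge graph built from topic co-occurrence.

    Every conversation turn reinforces the edges between the topics it
    mentions, so frequently co-occurring topics come to dominate the
    Being's sense of conversational context.
    """

    def __init__(self):
        self.edges = defaultdict(int)   # (topic_a, topic_b) -> weight

    def observe(self, topics):
        """Reinforce every pair of topics mentioned in one conversation turn."""
        for a, b in combinations(sorted(set(topics)), 2):
            self.edges[(a, b)] += 1

    def context_for(self, topic, limit=3):
        """Topics most strongly associated with `topic`, strongest first."""
        scored = [(w, a if b == topic else b)
                  for (a, b), w in self.edges.items() if topic in (a, b)]
        return [t for w, t in sorted(scored, reverse=True)[:limit]]
```

Something of this shape would let a Being “appear to understand the context”: when a participant mentions a topic, the Being consults `context_for` to pick a plausible next subject.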

Newbie Creators will be guided through the onboarding process by “meta-agents” and we “eat-our-own-dogfood” by utilizing Beings to teach Creators about how to create Beings.

Conclusion

Instigate is a new kind of creativity tool, coming soon to the Apple App and Google Play Stores – near you. Take your Instagram story and turn her into a Storytelling Being! As Director was to multimedia, so Instigate will be to AI.

Mark Anderson

Augmentation Of Text In The Post-Print Era: Casting Off The Drogue Of Paper Paradigms

Although much of our text is created and consumed entirely in a digital context, the legacy of the print era still lies heavily upon us. It is over 50 years since Ted Nelson and Doug Engelbart were first writing about digital hypertext. Yet many of our documents today are essentially still no more than digital facsimiles of paper. Surely we can do more? That is not a zero-sum proposal: a simple linear long-form text is perfectly suitable for some textual uses, but past limitations should not bind our future.

Happily, in the margins of the hypertext and knowledge communities there has been quiet progress, largely ignored, looking at ways to map and visualise some of the less obvious structure in the text we use to record our knowledge. The societal loss is that these tools are little known and little explored, whilst most of us simply type onto screen-based ‘paper’.

We have the techniques for fine addressability proposed by Engelbart and methods to link discrete texts—or sections therein—albeit not quite to Nelson’s standards. But whilst we are free to allow multiple trails, as posited by Vannevar Bush, it is something still ignored by education and academe.

We may argue the relative importance of text versus less traditional visualisations based upon the text. However, experience of using tools like Tinderbox, Storyspace and, more recently, Liquid Author suggests such considerations are moot. The real benefit to the author is to engage with many different display views, and all the more so if writing to inform others rather than writing for oneself.

Understanding and using text in this new, intertwingled, perspective is something we should work towards. Observe any large organisation (perhaps outside the software domain) and it is embarrassing to see how dated mainstream ‘office’ tools are. Underneath accreted layers of marginal gain they remain just a writing pad, an arithmetical pad and a drawing pad. The affordances of such office tools are meagre compared to my normal working desktop environment. The less we see and experience more powerful text tools, the more we need training even to make initial progress. No wonder large organisations struggle to preserve knowledge.

Large, unstructured, wiki-type hypertexts can be so much more than a bunch of discrete articles. Missing from such environments are the hypertextual tools to help plan and envision trails through the corpus whilst maintaining a coherent view of the intertwingled whole. Where am I? How do I get from here to there? How did I get here from there? Why aren’t this and that linked? More importantly, as writing in hypertextual environments has taught me: what is the optimal node size to create in order to support multiple trails that can be read with reasonable sequential coherence? Supporting these sorts of authoring needs is something we need to teach to generations already living in the post-print, hypertextual, world; not everything we need to share and preserve can be written in a social network app posting.

Mark Baker

The Best Future For Text Requires A Change In Author’s Culture And Tooling

What is the Future Of Text? I’m as good a prognosticator as I am a mountaineer or a brain surgeon. (That is, not at all.) But the Future Of Text surely depends on the character of text. What it has been, and what it is now, are surely what it will be. So, what is the character of text?

First, text is language. As such it is fundamentally different from video, for example. Text and video both create an experience, but the experience of video, like the experience of life in general, is addressed to the senses. Language bypasses the senses and is addressed directly to memory. Sensory experience ends up in memory, of course. And sensory experience can call forth memories, sometimes in very powerful ways. But that effect depends on having the actual experience again, or an analogous experience. Language, by contrast, can call forth memories on demand. That ability, to call forth memory on demand, using words alone, is what makes language both the great civilizing force that it is and, rightly used, the basis for the most subtle, powerful, and portable of the arts.

This is not to say that the ability to create sense experience artificially through pictures and movies is not also powerful. But its ability to call forth memory simply does not match that of language. It can sometimes achieve depth, but it fundamentally lacks breadth. The book is always better than the movie.

Language sits preeminent as the most profound and agile means of appealing to memory, and its pre-eminence is not a matter of technology but of the nature of the human mind and its sensory capabilities. Short of a Nuremberg Funnel, there is no technological threat to the pre-eminence of language.

Text is the vessel of language. It makes language navigable. It allows you to explore the minds of hundreds of people who you could never assemble in a room for your edification (not least because so many of them are dead). Even if you could get them all there, though, it would be harder to explore their thought than if you read their books. They would get distracted and argue with each other. (As Bill and Ted demonstrated.) They would not understand the precise nature of your interests. Sometimes, certainly, it is a gift to sit and talk with a particular mentor. But text makes the whole world your mentor and gives you the freedom to roam at will through the memory of the race. And for this purpose, text is infinitely more agile than recorded sound, which permits no skipping forward, no pause for reflection, no sideways glance to follow something up.

Text sits preeminent as the most profound and most agile means of enabling the exploration of the collective mind and memory of humanity. Its pre-eminence comes from its status as the vessel of language. It inherits from language all of its preeminent ability to appeal to memory. Its position, then, is not threatened by any communication technology that is not based on language, and I am aware of no technology that promises to replace text as the vessel of language, nor any reason to want one. The Future Of Text, therefore, seems assured.

But if the Future Of Text is assured because of its status as the best means of freely exploring the collective memory of humanity, there is a contradiction in that text is a linear medium. You can choose to survey a picture from top to bottom, left to right, or in a spiral from the center if you wish. You can only read text in one direction. The author of a text dictates the order of the experience. Where then is the reader’s freedom to explore?

This contest between the linear nature of the medium and the non-linear nature of the reader’s desire to explore is fundamental to the design and use of texts. It is why books have indexes. It is why the web has links and search engines. It is an area in which technology can make a profound difference to the Future Of Text. How long is the optimal linear reading experience, and how efficiently and effectively can the reader move to a new text to continue exploration on their own terms? The Web has created a revolution on both fronts, creating an environment in which Every Page is Page One. The immediate Future Of Text lies in part in refining our understanding of the rhetoric of an Every Page is Page One environment.

But both the tools and culture of text creation lag grievously behind. Authors still want to create and control a linear reading experience. Content strategy seeks largely to take back the power that the Web has given to readers to choose their own path. And since authors have not shown much interest in changing their ways, those who create tools for authors have not made great strides in supporting the creation of non-linear text collections. The short-to-medium term Future Of Text, I fear, may continue to be a struggle between writers seeking control and readers seeking freedom.

If the future should include a rapprochement, however, I think two elements will be essential. The first is a more robust approach to linking texts to each other, particularly in dynamic environments where texts are being added, removed, and edited continuously. The second is a tighter control of rhetoric to make sure that when a reader follows a link or finds content via search, that content does exactly and completely what it promised to do. That requires more formalization of rhetoric than is practiced today.

These two elements go very much hand in hand. Without a more robust control over rhetoric, we can’t reliably automate content discovery and linking, both of which depend on every piece of content doing the job it is supposed to do in a consistent way. Without better content discovery and linking, even the best content fails for want of being found by the right person at the right time.

The Future Of Text is not in doubt, but the changes in culture and tooling required to give it its best possible future are, alas, still very much in doubt.

Mark Bernstein

The Future Of Writing Lies...

Martin Kemp

AI And The Arts – Human Texts In The Future

My question is, will my texts – interpretative essays on art and science in their historical contexts – be written in future by thinking and feeling devices? I will tackle this from an oblique direction, by looking at what is a rather developed aspect in the field of computers generating “art”, namely the composing of music “by J.S. Bach”.

I find it unsurprising that computers can generate compositions that attuned listeners cannot readily distinguish from the actual compositions of Bach. It is no more surprising than that computers can beat humans at chess. This does not mean of course that the face-to-face combat of masters playing chess loses its appeal. The 2018 World Chess Championship in London (Carlsen vs Caruana) was massively attended and the tension in the main hall was electric. We do not, after all, lose interest in the 100 meters at the Olympic Games because a cheetah can run faster than any man or woman. This does not however answer the main issue with computer-generated Bach, since the arts cannot be measured in the same way as athletic achievement or victory at chess.

If I were a musicologist, I could look at Bach’s sacred music in the context of his employment as Kantor at St. Thomas’s Church in Leipzig. I could compare what he is doing in this Lutheran context with what Vivaldi was doing in Catholic Venice at the same time. I could look at text and setting in terms of how Bach navigated with inventive brilliance around certain kinds of musical strictures that did not apply to Vivaldi.

Let’s accept that a computer can produce a piece of choral music with all the characteristics of a Bach Lutheran composition (and that a Vivaldi Catholic piece could also be computer-generated). I could still potentially produce a cogent analysis based upon the computer versions – though capturing the exact relationship between the Protestant interpretation of the text and the nature of the music might be hard for a computer.

But – and this is the key point – the computer cannot effectively assume the human, experiential role of Bach in Leipzig, embracing his background, his lived life (organic and spiritual), his health, his family necessities, the demands of the church authorities, the acoustics of the church and nature of the organ, the size and quality of his choir, the nature and presence of the congregation at the time of its first and subsequent performances, his interactions with fellow musicians and composers, and so on according to the almost limitless contexts and contingencies that provide the chaotically complex platform for his compositions.

Do these contingencies matter if a computer can produce a composition that is ostensibly indistinguishable from those produced by Bach? They matter absolutely in terms of the actual and personal generation of the individual piece in real time and in real place. The computer version is irredeemably imitative of the final product that emerged from these contingencies, on the basis of those aspects of a composition that say “Bach” rather than Buxtehude. The computer has not invented the St Matthew Passion or the St. John Passion. Above all it has not devised from scratch the passionate content or the nature of the style that says “Bach”. (Incidentally, science generally is adept at handling isolated formal characteristics in art but is hobbled by content.)

Where do I as a historical essayist stand in this? My job is to seek out cogent factors in the chaotic order and disorder that marks the emergence of the compositions and to refresh our perceptions in a way that I hope will provide insights and open up new sensory experiences. It is a job of human communication, with all the plusses and minuses that this brings. I can look at Justin Gatlin winning the World Championship 100 meters, and think it is disgusting that a known drugs cheat is allowed to do so. I know that a cheetah would beat him. So would a driverless car. But it is the messy human dimension that matters in my business. That mess is the result of decades of my unique experiences and inherent characteristics. From that mess “art” and criticism emerge. Even in an AI world, this human dimension is what will matter for all of us.

Martin Tiefenthaler

tl;dr vs. ts;nec

[Too Long; Didn’t Read Vs. Too Short; Not Enough Content]

A text may, should, must and indeed will be long in order to qualify as text. The only useful texts, whether printed or on-screen, are long texts. This last sentence itself would need to be elaborated over several pages in order to be fully understood by anyone who has not already had the same or a similar thought. In other words, content that is worth reading and spending one’s precious time on needs to be of a certain minimum length if it’s to be sufficiently discursive and enable the reader to follow a train of thought rather than just stumble over fragments of thought.

Anyone who finds this assertion new or alien to their way of thinking should be given the benefit of several paragraphs or even pages elaborating on the idea in further detail, rephrasing concepts for the sake of clarity and providing greater depth with extensive argumentation and illustrative examples. Whether in print or on-screen, the explanation should be as readable as possible so that the reader is not distracted but can devote their entire attention to reflection and judgement. Only then is reading truly a tool for the assimilation and communication of cultural knowledge.

That said, it is relatively but not entirely immaterial how and in what form a tweet, advertising slogan or Facebook entry is composed. Such short texts offer content in a shortened form, at most transmitting “info”, but not enabling knowledge to be acquired discursively. Wherever writing is not a medium of analysis, criticism, debate and counter-argument, it remains incidental and interchangeable, at best decorative.

Short texts can only be said to have real content if they’re embedded in the context of other texts or adjacent to them, whether in a complete work, in the form of thesis and counter-thesis or argument and counter-argument, or in any form of complex interlinked text, as in bibliographies or hypertext. To have any use at all, a tweet must contain at least one link leading to further content. Otherwise, it is nothing but an emotionally dubious advertising slogan, whatever the pearls of wisdom, sensitivities, political prejudices or fashion items being promoted.

And because this text is itself only one page long, it contradicts its own argument: it is exposed to all sorts of misunderstandings that can be dispelled only by following up with further texts explaining the different possible interpretations, to which, in turn, properly nuanced responses can be made.

This presupposes a certain style of interaction when writing and commenting on one another’s ideas, an approach where each is aware of the potential for misunderstanding and therefore the need to cultivate a tone that has little in common with that of so-called “social” media or pure polemic. Indeed, it is only in short texts that a harsh tone with destructive psychological and political impact can appear acceptable and escape being exposed as destructive and toxic.

Logically, therefore, texts that are bad, false and empty of content should be required to be fully fleshed out in order to show just how bad, false and empty they are. With short texts, this can easily be overlooked; because of their conciseness, deficiencies can be glossed over at every level. This is obvious in the case of, say, proverbs, bumper-sticker aphorisms and Facebook entries, which, on closer scrutiny, cannot live up to their promise of truth or depth. Contrast these with texts that are sufficiently detailed, humanistic, enlightened (areligious and anti-hierarchical), emancipatory, and committed to grass-roots democracy and the common good, which through their discursive depth and breadth of thought seek to provide a counterbalance to the short-form trivia machine.

Texts have to be long in order to be fully explicit, comprehensible and unambiguous. Never before has there been so much text and never before has there been so much material available to read. Nevertheless, most text and therefore most reading behaviour has shifted to an extreme of brevity that might appear to be highly efficient, but is actually detrimental to in-depth discourse and damaging to thinking beings. A text may, should, must and indeed will be long in order to qualify as text.

Maryanne Wolf

Deep Reading, Democracy, And The Future Of Text

Human beings are analogy-makers and story-tellers. We make great, protean leaps of imagination and discovery when we draw analogies between what we know and what we don’t. Books — from Margaret Wise Brown’s Goodnight Moon to Stephen Hawking’s A Brief History of Time — represent the single most important source of our ability to make increasingly sophisticated analogies and insights over a lifetime. For example, to this moment, when confronted with new situations or concepts, I find myself turning to epiphanies from beloved novels, allusions in poems, facts from articles. We are the stuff of remembered text, the foundation for our intellectual development and society’s progress.

What would happen, however, if we began to lose our relationship to text as the basis for analogy, insight, and reflection? There is much we can learn from how human brains came to read. Reading is an invention, not a genetic endowment. The brain had to build a new circuit for this new function. But, unlike older circuits, the reading circuit is plastic and reflects whatever medium and language is being read. Further, the circuit can be very basic or become elaborated over time. Deep reading involves connecting basic decoding processes to analogical thinking, critical analysis, and empathy — sophisticated functions which require extra time to process.

Therein lies an unanticipated rub. As our species reads ever more on newer, digital mediums, rather than printed text, reading has changed. Digital mediums advantage fast processing and multi-tasking, the converse of deep reading requirements. The more we read on screens, the more likely we skim, browse, and literally “short-circuit” time-consuming critical analysis and reflection. Digital reading doesn’t preclude deep reading, but disadvantages it.

My concerns for the Future Of Text flow from these medium-based differences. Increasing research demonstrates that we give less time for processes like critical thinking, inference, and worrisomely, the evaluation of truth. When given the same content to read on screens or in hard copy, young adults comprehend the material better in print, despite believing that they are better on the screen, because they read “faster”.

What would happen if the reading brain begins to lose such essential intellectual and empathic capacities? Digital technologies contribute profoundly to societal progress, yet their use also contributes to rapid manipulation by those who would raise false fears, hopes, and blatant untruths. Deep, critical reading — which immerses us and our children in the lives, thoughts, and feelings of others — is an antidote to manipulation. Deep textual reading must be preserved if democratic societies are to survive and flourish.

If I remain optimistic about the Future Of Text, it is because, like the philosopher Charles Taylor, I believe that the inherently generative nature at the core of language will continue to sustain and propel human knowledge — and in so doing, give us ever fresh reason to preserve text in all its present and future forms.

Matt Mullenweg

Digital Text Gave Us Radical Power. Agency And Dignity Should Be Next

To spread the word is to exercise power: that’s why for centuries, governments have so tightly regulated the ability to disseminate texts (in many places they still do, of course). Monarchs no longer tell people which words they may or may not use and which texts they may or may not read, but we still operate within a complex network of constraints: legal, technical, social. From wax tablets to pages to screens, we love to consume words on flat, even surfaces, but their path to smoothness is still full of friction and pressure.

It takes time and distance to recognize that some recent developments we (alright, I) wanted to think of as disruptive were, more likely, moments of historical iteration. The age of techno-utopianism — emerging earlier this century with social networks like Facebook and publishing platforms like the one I co-founded, WordPress — promised to fundamentally change people’s ability to broadcast their thoughts and ideas, from the lofty to the pedestrian. In many ways it did: leave your smartphone at home for a day and try to accomplish, well, anything. But not unlike with print before it, we’re still coming to terms with the meaning of our radical power to share, connect, and absorb so much so easily. Both as individuals and as collective entities (incorporated or not) we don’t always catch our mistakes before their effects become all too real.

Is it disappointing that the failures of the past (in some cases, the distant past) keep haunting us, and that we so often fail to transcend our worst pre-digital instincts? Sure. Especially in those cases — and there are far too many of them — where the freedom we’ve carved out for ourselves, one line of code at a time, facilitates the harassment and targeting of the most vulnerable and marginalized in our societies. When we stop imagining this moment as wholly new, however, and find our place within a continuous history of text and textual interaction, we also open up the space to learn and to imagine what our future might look like. Does it entail the ability to forget and to be forgotten online? Can new combinations of human thought, computational power, and social practices create digital spaces that better protect us from violence and manipulation? Might we open-source our way to a world where access to disseminate our words remains free, but abusing that access isn’t free of consequence — at least less so than it is today?

There’s still room for beauty, wit, and kindness on the web. I sincerely believe it, and witness it every day. That’s not going to change, at least not for those of us with the privilege to make choices about how and what we consume. My challenge — our challenge — is to make it possible for anyone to create, to interact, and to find community without having to renounce our agency, let alone our dignity, along the way. It’s to enable a feeling of freedom, however mediated or pixelated, that doesn’t require us to lock ourselves in increasingly hermetic walled gardens. Escaping them was the whole point of the past few decades of the internet.

Michael Joyce

The Future Of Text As A Two-Level Impurity Lies In Its Past

“Print stays itself, electronic text replaces itself,”1 I wrote in 1996 as something of a secular koan, the second half of which employs an adjective with an inherent spark of quantum materiality rather than the cool detachment of the adjective, digital, which not long thereafter became canonical, despite its etymological embodied root. Written for Geoffrey Nunberg’s edited volume The Future of the Book, it was republished in my own collection of essays Othermindedness: The Emergence of Network Culture in 2001 just before I fled the flapping flags mounted in lieu of machine guns on the pick-ups of homegrown proto-fascists prowling post-September eleventh Dutchess County, NY, in search of imaginary Al Qaeda operatives. I fled first to Italy in the hills of the Marche at the home of my Italian translator and then at the turn of the year to Berlin, where by then they knew better about fascism, at least for the interim. At Humboldt University where I was a visiting fellow, my host kept promising that he would arrange a meeting with his friend and colleague Friedrich Kittler, who was purportedly working day and night on a history of actual digital signification and symbolic communication tracing the history of how physical digits served to code nonverbal communications (as in “give him the” for instance) and complex relations (as in the indexical index finger), and even embodied computational ones (as in the trancelike and near instantaneous manipulations— etymological hand signal <from PIE root *man- (2) “hand”> goes here— of abacus grand masters).

Looking now at the email from Frode Hegland, whose subject line reads “The Future Of Text : A 2020 Vision [invitation to contribute],” I am not unaware of the irony of summoning the title of Geoff Nunberg’s book at the start here, or that to all appearances I may seem not to have addressed the current question as yet, although in fact I think I have, skilfully if not pointedly (or all that humbly), evoking recent historical parallels which allude to how a new generation of four-lettered, red-hatted pick-up truck proto-fascists prowl and preen en route to stadia where their leader summons their base instincts whilst breaking records set by (but not recorded by) Elton John.

But the truth is it doesn’t matter. But the truth doesn’t matter. But the truth is a process of the kind of “intra-activity of mattering” that quantum physicist, philosopher, and queer theorist Karen Barad calls “making leaps here and there, or rather, making here and there from leaps, shifting familiarly patterned practices.” 2 And that, however paradoxically, puts text, and its future, in a good place I think. If the große Lüge has suited itself up as fake news and alternative facts in a clownsuit of hashmarks that a große Lüger kicks through like a harlequinade of fallen leaves, there is still the alternation between fixed and variably recurring instances of textuality (for which neither print nor electronic/digital may be the best terms) whose instantaneously alternating charges can effect a quantum retrocausality in which the future may determine the past, including that past which we mark by calling it our (constantly replaced) (and apparently parenthetically—and peritextually— overdetermined) present. It is something quite like this that a group of Russian, Swiss, and American physicists recently exploring “protocols for circumventing the irreversibility of time” set out to perform in the particular kind of recurring textual/mathematical replacement that constitutes the “complex conjugation” of a quantum algorithm, which in this case enables the researchers “to experimentally demonstrate a backward time dynamics for an electron scattered on a two-level impurity.”3 IOW performing an #instagrammatic, imagotextual memesis [sic, but vide the manga {~MAGA}, aka idem] however briefly reversing time’s (and txt’s) inevitable TikTok.


  1. (Re)Placing the Author: “A Book in the Ruins,” in The Future of the Book, Geoffrey Nunberg, ed., University of California Press (1996).
  2. “On Touching — The Inhuman That Therefore I Am,” Karen Barad, Differences, Volume 23, Number 3 (2012).
  3. Arrow of time and its reversal on the IBM quantum computer, G. B. Lesovik, I. A. Sadovskyy, M. V. Suslov, A. V. Lebedev & V. M. Vinokur, Scientific Reports volume 9, Article number: 4396 (2019)

Michele Canzi

Lakes, Ponds, Rivers And Rains: History And Future Of Text

Stock and flow, masses and niches

In many regards, the history of written words is a story of unbundling. WordPress has replaced traditional journalism. Twitter has replaced posts and comments. Medium has replaced the old-school blogs. Substack has replaced blog aggregators. Roam (more to come later) is supplanting Evernote note-taking. None of this is a bad thing per se: new swaths of the world population now have access to more immediate ways to express and consume ideas thanks to the frictionless architectures of evolving services.

I think ‘stock and flow’ is a useful construct to describe today’s textual media. The idea is relatively straightforward: there are two kinds of quantities in the world — stock is a static value: money in the bank or trees in the forest. Flow is a rate of change: fifteen dollars an hour or three thousand toothpicks a day. Flow is the Facebook feed, the comments and the tweets. It’s a stream of daily (sometimes hourly) updates that people lean into to remind others they exist. Stock, instead, is the enduring stuff. It’s the content you produce that’s as interesting in two months as it is today. It’s what people look for when they type some keywords in a search bar. It spreads slowly but surely, building influence over time.

Keep in mind that the vast majority of text we consume today is on the internet, so the business model is a critical lens to understand written content. The idea is that the economics of the Internet work for two types of businesses: massive publications that can take advantage of the Internet’s scale to reach a huge number of people very cheaply and efficiently, and niche businesses that can take advantage of the super low costs of acquisition to reach a very narrow niche of people all over the world. Hence, there are only two ways to play the game: either you make a tiny bit of money from a lot of people (traditionally, through advertising) or you make a lot of money from a handful of folks. This generally entails subscriptions.

Of lakes, ponds, rivers and rains

Lakes (stock, scale) — ‘Lakes’ are the bread and butter of internet-based long-form content, like journalism. The web provides three huge advantages over newspapers. Distribution is completely free and hosting is extremely cheap. Everyone with an Internet connection who can read The Wall Street Journal has access to this form of textual media. The potential reach of every piece of information is equal to the total addressable market, thanks to social networks and email. Text has a beginning and an end; it’s monodirectional (from an author or group of authors to the masses). Though with many exceptions, advertising is the main monetization lever for lakes.

Ponds (stock, niche) — A bunch of WSJ writers have Substack newsletters on the side. They spend most of their time swimming in lakes, yet they enjoy dipping their toes in ponds too. Why? Because smaller bodies of water are cozier, more intimate, and familiar. What we have lost in this age of mass textual communication is curated content produced and delivered by many creatives, from all kinds of different backgrounds, who pursue the kind of work they love on their own terms and cater to the needs of smaller audiences. This is the realm of smaller crowds that jump paywalls to express their loyalty to thematic authors.

Rivers (flow, scale) — Twitter threads are the perfect example of raw, unstructured ideas. Marc Andreessen originally invented the thread as a workaround to the 140-character limit. Threads are a very powerful medium of communication. They are collaborative, permanently evolving running streams of thought. There is no clearly identified business model here, because there’s no real product (yet). Threads are catalysts of wild, non-performative thoughts, and entry barriers are particularly low for the masses, which makes them so interesting to me. Storms of tweeters flock to threads to nurture, edit and evolve their thinking in real-time, in what looks a lot like a conversation among friends (or foes).

Rains (flow, niche) — Rains are a flow of ephemeral, dispersive, hybrid ideas expressed through text. They are virtual spaces to write, structure, organize, bundle and destroy. And the best part about it is that you don’t have to think consciously about it: it gets compiled down to finger-tip skill as you use it. Though we are still in the early days of exploring these new ways of combining infant thoughts and partial ideas into adult reasonings, many (myself included) believe Roam is the best positioned product to realize a complex vision of hypertext as theorized in the 60s by Ted Nelson. Instead of blindly cramming new information inside your head, Roam encourages you to literally connect the dots — to create your own links and metaknowledge, thus increasing both your memory and understanding of new information. Good ideas are sexual beings that mate with historical facts, analogies, theories and direct experience, to produce new ones. Text is just the matchmaker.

What’s truly interesting is that the history of the web is punctuated by a series of products following an evolutionary path from the whole to its atomic unit. Newsletters emerged from old style blogs and online newspapers. Tweetstorms are descendants of forums. The successful products took big meals and converted them into snacks. Ultimately, we like simple, focused products that enable frictionless behaviour to compound over time through intelligent linking and frequent engagement.

The Future Of Text will be a broad, decentralized, unstructured repository of nuggets of information with the potential to compound over time and be spun into larger new ideas.

The Text Map

Michael Nutley

The Future Of Text

I’m a journalist. It’s been my job for 35 years, and the overwhelming bulk of my output has been in the form of text; initially in print and more recently online.

For that reason, and because I’ve been writing about online media and marketing for the last 20 years, my look at the Future Of Text is going to be from the perspective of the publishing industry.

One of the main reasons why we’re asking questions about the Future Of Text is that, with the explosion of bandwidth, the internet has moved from an experience based on text to one based on video. Last year Cisco predicted that video will account for four-fifths of total global IP traffic by 2022, up from three-quarters in 2017[1]. I’m repeatedly told that people would sooner watch a video on a topic than read about it, whether that topic is fixing a hole in plasterboard or which marketing automation software to buy for your company.

Hence the concern. Has text been superseded? Is video a better way of communicating? Should we all sell our QWERTY keyboards and buy voice recorders and editing software instead?

There are a number of things to unpick from this. The first is the idea that people prefer video to text. This is meaningless unless we know what each person is trying to achieve. Video is great for explaining some things; plastering, for example. It’s much less great for complex, detailed ideas.

This is related to the fad for “snacking” content that dominated the publishing world in the early part of the 2010s. People, we were told, only wanted content in easily consumable snippets; hence Buzzfeed and its many imitators. This philosophy spread almost everywhere. All content had to be made shorter, simpler, easier to consume. At the same time, near-ubiquitous wifi and the growing power of the smartphone made it possible to watch video on the go, something that had never been possible before. But because people read faster than they speak, the emphasis on short content combined with the growth of video meant the amount of information being conveyed in each piece of content dropped still further.

This emphasis on video content was also driven by the new economics of publishing. As the business model for publishers switched from a mixture of advertising and consumer payment to just advertising, the pressure to make that advertising generate more revenue increased. The premium for video ads was higher than for standard banners and buttons, so publishers sourced more video to place advertising around. The overall result was a significant contribution to the dumbing down of public discourse.

However, none of these trends turn out to pose an existential threat to text. The question of shorter and shorter content was based on a misapprehension. Certainly, many people wanted short snippets of content from a newspaper they read on their commute. But that didn’t mean everyone wanted short snippets of content in every situation. The amount of information you want – and therefore the length of article you’re prepared to read to get that information – will vary from topic to topic, from situation to situation. Even the idea that people’s attention spans were getting shorter has turned out not to be true[2].

Meanwhile the appetite for long content, and the willingness to pay for it, is reappearing, at least at the top end of the market. The success of subscription models at The Economist and the New York Times shows that there are times when people want to read long, detailed articles, and that they’ll pay to do so.

And the thing is, we’ve always known about the limitations of text, and augmented it where necessary. We were adding pictures, maps and diagrams to books before printing was invented. And what are footnotes if not an early form of hypertext link?

What’s more, text isn’t important just as an end-product. The overwhelming majority of videos, podcasts and voice interactions involve text at some point in their creation.

There’s an internet cliché that no form of communication has replaced another, that each new arrival just reshuffles all the others into new roles, niches etc. It’s not strictly true – carrier pigeons and the heliograph are long gone – but text has survived centuries of technological change. In that sense it’s like the bicycle or the electric guitar. The fundamentals were established early on, and any subsequent changes are tweaks to the basic formula.

Even the currently fashionable contention that voice interfaces will take over how we interact with our computers – because voice is a more natural form of communication – seems questionable to me. I’m inclined to follow Willy Wonka’s view, that: “If God had meant us to walk, he wouldn’t have invented roller skates”[3].

It comes back to use. A casual question about the weather is easiest voiced; one where the answer is more complex and detailed, and to which you may want to refer back, is probably best answered in text form, with all the necessary augmentations. There are age-related preferences – I hate recipe videos, but my kids use them all the time – but generally people choose the format that works best for the content they want at any one time.

Is this article best presented as text? Or as a video? Or would it be better as an audio file so Alexa could read it to you?

The choice, I suggest, should be yours.


  3. Charlie and the Chocolate Factory, by Roald Dahl

Mike Zender

The Future Of Text

Americans no longer talk to each other, they entertain each other. They do not exchange ideas, they exchange images. They do not argue with propositions; they argue with good looks, celebrities and commercials.

— Neil Postman Amusing Ourselves to Death: Public Discourse in the Age of Show Business

As I write this we stand at a moment of textual history following eons typified by orality then a shorter era dominated by literacy in an electronic age characterized by massive information edited into tiny texts. These bumper-sticker-like texts eschew both oral-oriented narrative forms with their engaging and wisdom-embodying characters and plots, and literate-oriented reasoned arguments drawing upon data and supported by citations of previous findings. In place of these two historic traditions, today’s texts are formed in an impossibly massive world of data that, because of its sheer volume, is being reduced and edited into Twitter bites and Facebook posts that are in turn shaped by algorithm-driven document scans and Google searches.

Marshall McLuhan convinced us that the form governs the content, “The medium is the message.” More recently, Neil Postman observed that each media form preferences certain kinds of content saying that you cannot do philosophy using smoke signals, “Its form excludes the content.” What kinds of content does the current text form support? Can this new form of text connect us, inform us, make us wise? Postman’s quote at the head of this article certainly opens the question.

If today’s textual forms can make us wiser, it is certain they will operate within established measures of communication, because human communication capacities, unlike technologies, have not changed. The principles for analyzing texts, hermeneutics (the science of interpretation), can be used to analyze today’s textual forms. If today’s texts correlate well with hermeneutic principles, they might also be expected to support the development of knowledge and wisdom; if they do not correlate, they will be suspect.

One fundamental principle of hermeneutics is context. If you ask what the word “bow” means, the only possible answer is that “it depends on the context.” If “bow” is in the context of “arrow” it means one thing, while in the context of “ship” it means another. Text originally meant “woven together.” The first level of contextual analysis is the word and phrase, the threads of the text. The meanings of these individual threads can be determined lexicographically – individual words in the context of their historic meanings – while attending to surrounding verbal context to determine the correct meaning. Another critical dimension of context is the stories and texts that preceded the text’s author. What did the author know, what was in her mind, what knowledge guided her as she composed her words into a text? These are questions of history which must be consulted before meaning-making is complete. For the sake of this short summary, a final critical context is the world of the text’s consumer. What are the concerns and questions, what worldview preoccupies the reader of the text? In what communities, what scope of social interaction, is the reader engaged? Certainly, the reader governs the interpretation of the text, and readers live in a context that guides meaning-making. All texts are woven together by and in these contexts (a compound: con = with + text = woven), and this weaving together of one word, history, and community with another can be built into knowledge; knowledge applied to and used in reality can produce wisdom for those who listen to reality’s answers.

How do three characteristics of current textual forms, massive volume, short length and scanning/searching, stand up to hermeneutical contexts? Clearly, sheer volume dictates an overview approach. Overview text is most often a short synopsis, whether that is the summary from a search engine, a Twitter feed, or a blog post. In each case the text is severed from its context, absent historic or authorial background. Sheer volume not only of data but of human participation also necessarily limits the number of communities in which a reader can participate. Because people tend to associate with like-minded others, today’s texts often occur in an echo-chamber environment where a single message resonates and amplifies. In the emerging form of text, brevity limits verbal context, disconnection from history limits perspective, and segregation of community narrows association, and none of these support rich hermeneutics.

Finite people such as we envision the future by extending trends of the present. Given this current form of text, what kind of future might it evoke? Short-context textual forms and their artificially intelligent algorithms seem to be supporting a deep editing of facts, isolated from history and from other people with different ideas. With all our data, our texts exist as a narrowing of contexts. Based on current social trends supported by narrow texts, the future looks dark. Public discourse has seldom been worse.

Within current trends there is also a glimmer of light. Short texts shared within the narrow context of friends and family are supporting personal interaction and connection that facilitates expressions of love and concern and support in the face of life’s trials. But even here the short form truncates communication to the brief and essential, leaving much unsaid and left out. I believe it was Ernest Hemingway who attempted to write the shortest short story: “For sale: baby shoes, never worn.” This story snip proves that it is possible to have a short text that communicates profoundly and can evoke wisdom. It is possible. But Hemingway was a great writer.

Based on this very brief reflection, thus far new media largely fail the hermeneutics test. Verbal context is narrow and clear only because the broader historic and social contexts have been severely narrowed to accommodate overwhelming volume. Short-form digitally sorted texts seem to be primarily making us less wise and more angry. At this moment in the development of cyber-based forms of textual discourse, it is not possible to know whether tapestries woven from short bursts in small communities can support the development of wisdom or will simply destroy it. Only the future will truly show what sorts of plants spring from these textual seeds.

Naomi S. Baron

Picture This: Could Emoji Replace Writing?

Written language is a story of multiple inventions. Typically, historical progression has been from stylized pictorial representation to abstract rendering. What looked like a fish in early cuneiform became lines and wedges in New Babylonian. An early Semitic ox head became the letter A. Sometimes textual representation has not sufficed. Graphic images – some abstract, some realistic – might be incorporated as well. Handwritten manuscripts were sometimes illuminated, and printed books (and then newspapers) came to include drawings and photographs.

How about graphic add-ons in digital writing? What do they look like, what meaning do they express, and can they complement – even substitute for – written script?

Scott Fahlman introduced the smiley to mark an online message as a joke. Soon cascades of other emoticons followed in the West (along with kaomoji in Japan). Think of emoticons and kaomoji as that early cuneiform fish: a schematic drawing representing a general concept and therefore open to contextual specification, like much of spoken and written language.

Then came emoji – literally ‘picture’ (Japanese e) plus ‘written character’ (Japanese moji). The original group of 176 were visually generic, like emoticons. When the Unicode Consortium began encoding emoji, the list was barely 700. By mid-2019 there were 3019 – plus the emoji people create themselves.

Besides the numerical explosion, something else has been changing: a shift from generic to specific graphic representation. Instead of one emoji for ‘tree’, we now have ‘deciduous tree’, ‘evergreen tree’, ‘Christmas tree’, and ‘palm tree’. Personal life experience and identity are also engendering a host of new Unicode entries: a drop of blood for ‘menstruation’; three different emoji for ‘blond-haired person’ – each with a different skin color. In our age of identity politics, netizens are lobbying to see themselves more precisely depicted.

If emoji are designed to supplement text by conveying our feelings (including about what we just wrote), are they succeeding? Research suggests that like beauty, their meaning is often only in the eye of the sender or recipient – yet not the same meaning for both. Misinterpretations occur within mobile carrier platforms (does that drop of blood mean ‘menstruation’ or ‘blood drive’?), across platforms (since the “same” emoji is rendered differently by various carriers), and across cultures (while Westerners look at mouths, Easterners focus on eyes).

The primary challenge with emoji is that, because they are increasingly pictorial, we assume their meaning will be transparent, when it often is not. Moreover, given the thousands of choices, we can spend buckets of time selecting the “right” one. But lastly, as with all graphic illustrations intended to carry some of the semantic load of our communiqué, relying on emoji reduces the incentive to express our message clearly in words. As Scott Fahlman confessed in a radio interview, if he took more care in writing online messages, he wouldn’t need emoticons.

Online graphic images can feel emotionally empowering to the message sender and amusing to the recipient. But don’t count on emoji as failsafe expressions of meaning. No, they are not adequate substitutes for script.

Nasser Husain

The Future Of Text

I was reminded recently of Utah Phillips’ 1996 musical collaboration with Ani DiFranco, ‘The Past Didn’t Go Anywhere’. The example from the titular song that stuck with me was the idea that the speaker could leave the room, pick up a rock older than any of us, and drop it on the foot of his adversary. The rock in that example is not different, materially speaking, from language (at least the way that I have come to look at it, in any case).

The ‘future’ of text is a function of its past.

My students recently were discussing ‘textspeak’, and they were examining the odd kinds of marketing spam that they get via their mobile telephone numbers. One such campaign had, in its title, the word ‘NoReply’. Two things about this neologism struck me (old-fashioned and cranky man that I have become): first, the lack of space between the words No and Reply, and second, the fact that the lack of space isn’t a problem for my students, at all.

The Future Of Text is colonial – it will insinuate itself into every space, violate even its own boundaries. What happens when there’s no room left on the page? Writing will be ‘overwriting’. Over and over, a cross-written letter on top of a palimpsest, on recycled paper, until the pages become an unintelligible blackness like Jean Keller’s Black Book. Plenitude morphs into void, and we’ll have to create the beginning/word all over again.

Or, as Craig Dworkin writes in ‘Zero Kerning’:

The semiotic system of language depends on its multiple articulations at different levels: those intervals between letters, words, and larger units of grammar which introduce the physical space of difference that permits us to distinguish, cognitively, different meanings. Moreover, as evinced by the move from the scriptura continua of western antiquity (in which texts were written without spacing between words), such intervals have had far-reaching conceptual effects, with changes in textual space changing the way we understand the world around us.

The Future Of Text is to understand that it will layer itself in/onto every available surface, and will squeeze out the ‘textual space’ in which we momentarily rest as we grope for ‘different meanings’. This condition need not be an apocalyptic confusion, but it will require a paradigm shift. An internet of things must be preceded by an encoded world, a reality augmented by a virtual and technologically visible layer of language.

Every room a story. These walls will talk, embedded with descendants of Alexa and Siri. And I hope it springs, fully formed, from the speaker of one of them.

Neil Jefferies

Paths Among The Stars - Reconstructing Narratives In A Distributed World

Approaches to digital textual representation and analysis are complicated by the fact that “text” is really an amalgamation of a number of different concepts of language and its symbolic representation. Even in its earliest forms, the apparently independent origins of writing in Mesopotamia, China and, with lesser impact, Mesoamerica1 led to divergent evolutionary paths. As a generalisation, Chinese characters2 became effectively a syllabic written dialect amongst multiple spoken dialects, with strong echoes of its pictorial past, whereas in the West alphabets developed more abstract forms representing a more literal serialisation of spoken sounds. In more recent history, a much broader variety of textual forms has emerged that are not rooted in speech: musical and mathematical notations, representations of sign languages, and of course programming languages, to name but a few. Unique to the digital world is the emergence of emoticons (such as emoji and kaomoji) as, at least in concept, a language-independent set of symbols for communication.

Written language appears first in more fragmentary forms, limited by the available technology and literacy: annotations on images and objects; records and laws; myths and prayers, for example. As availability improved, longer, more discursive forms became more widespread and subsequently came to form the basis of much intellectual discourse. A common problem with more fragmentary text forms is that understanding their meaning depends strongly on extrinsic and hard-to-find contextual information. In theory, longer forms should be able to be more complete in this respect3 – although the extent to which this is successful is open to question4. In practice, most texts can only be properly understood or analysed with reference to a broader, often implicit, linguistic and contextual framework. Each new form introduces a new area of domain-specific context required for interpretation, and in the digital world this context can now change at an alarming rate. A particular example would be emoji, many of which have already acquired secondary, often highly time- and culture-specific meanings, quite distinct from their original intent5.

So how can we start to represent digitally the relationship between a text, or rather the elements of a text, their intended layout and/or rendering, and the broader network of resources that give them meaning? Publishing a separate, usually unlinked paper is still a common, and highly unsatisfactory, practice. Embedding such information within a version of the text using technologies such as XML and TEI can only capture a modest level of complexity (limited primarily by a single hierarchical view of textual structure6) and cannot account for ongoing contextual evolution. Such ‘snapshotting’ is conceptually rooted in the production of physical print or manuscript artefacts. A better approach might be to consider stand-off markup7 or annotation8, which can accommodate a complex network of relationships to contextual entities (such as people, places and events9), allow multiple viewpoints and narratives to co-exist, and permit evolution without needing to alter the original sources.
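
The stand-off principle can be sketched in a few lines of code. The following is only a minimal illustration, not the API of any particular standard: the text, offsets, entity names and annotation types are all invented. The essential property is that annotations live apart from the source and point into it by character offsets, so several viewpoints can target the same unaltered text.

```python
# A minimal sketch of stand-off annotation: the source text is never
# modified; annotations point into it by character offsets, so multiple
# (even overlapping or conflicting) hierarchies can coexist, and new
# context can be added without touching the original.

text = "Call me Ishmael. Some years ago I went to sea."

# Each annotation records a span plus an optional link to an external
# contextual entity (a person, place, or event). Identifiers invented.
annotations = [
    {"start": 8, "end": 15, "type": "person", "ref": "ex:Ishmael"},
    {"start": 0, "end": 16, "type": "sentence", "ref": None},
]

def resolve(text, ann):
    """Return the span of source text that an annotation targets."""
    return text[ann["start"]:ann["end"]]

for ann in annotations:
    print(ann["type"], "->", repr(resolve(text, ann)))
```

Note how the two spans overlap; an embedded (inline) markup scheme with a single hierarchy would struggle to express exactly this kind of layering.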

The advent of widespread digital communications technologies has seen a significant shift in textual content back towards a more fragmentary model. Discourse can be scattered over multiple blogs, emails and tweets, each corresponding to slightly different participants and audiences. Papers are increasingly co-written dynamically by multiple authors using tools such as Google Docs and published online (with embedded links to data, code, and images), with the ability to release new versions as a result of ongoing discussion10. A static publication is increasingly an afterthought or an administrative act to satisfy tenure requirements rather than an act of scholarly communication11. The challenge for memory organisations such as libraries and archives is how to capture and preserve such a discourse, which is no longer a simple ‘thing’ that can, or will, be deposited. Here again, we need to deal with dynamic, graph-like networks of information which include, and in turn give meaning to, texts and text fragments.

An initial step would be to define a standard way of citing texts and text fragments that is independent of the location, the file format used to store the text, and the language(s) of the text. In essence, this would need to define a basic coordinate system for locating a specific fragment within a larger body of text. This common basic substrate could then be used to construct higher level fragment specifiers based on, for example, textual form, a specific rendering (especially for digitised materials) or linguistic structure. Crucially, this allows both human- and machine-friendly approaches to text to co-exist and interoperate, essential in an increasingly digital world.
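
Such a coordinate system might be sketched as follows, loosely in the spirit of canonical citation schemes such as CTS URNs; the namespace, work identifier and passage scheme here are all hypothetical, and a real scheme would need community-governed identifiers.

```python
# A sketch of a location- and format-independent fragment citation.
# The reference names a work and a position within it, never a URL,
# file format, or byte offset in one particular encoding.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class FragmentRef:
    work: str                                # stable work identifier
    passage: str                             # hierarchical coordinate, e.g. "1.2"
    chars: Optional[Tuple[int, int]] = None  # optional character span in passage

    def urn(self) -> str:
        ref = f"urn:cts:demo:{self.work}:{self.passage}"
        if self.chars:
            ref += f"@{self.chars[0]}-{self.chars[1]}"
        return ref

# The same reference resolves whether the text is stored as TEI, plain
# text, or a page image with a transcription layer.
ref = FragmentRef(work="iliad", passage="1.1", chars=(0, 5))
print(ref.urn())  # urn:cts:demo:iliad:1.1@0-5
```

Because the citation is just structured data, both humans and machines can construct, compare and resolve it, which is the interoperability the paragraph above calls for.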


  1. Denise Schmandt-Besserat, ‘The Evolution of Writing’. Accessed 9 October 2019.
  2. John DeFrancis, ‘The Chinese Language: Fact and Fantasy’, University of Hawai‘i Press, 1984.
  3. For example, using footnotes.
  4. Mike McRae, ‘Science’s ‘Replication Crisis’ Has Reached Even The Most Respectable Journals, Report Shows’, Science Alert, 2018.
  5. Diana Bruck, ‘25 Secret Second Meanings of These Popular Emojis’, BestLife, 2018.
  6. Andreas Witt, ‘Multiple Hierarchies: New Aspects of an Old Solution’, Proceedings of Extreme Markup Languages, 2004.
  7. An underused aspect of TEI.
  9. Howard Hotson and Thomas Wallnig (eds.), ‘Reassembling the Republic of Letters in the Digital Age’, Göttingen University Press, 2019.
  10. Jocelyn Kaiser, ‘Are preprints the future of biology? A survival guide for scientists’, Science, 2017.
  11. Phil Davies, ‘Journals Lose Citations to Preprint Servers’, The Scholarly Kitchen, 2018.

Niels Ole Finnemann

The Future Of Text In The Era Of Networked Digital Media

One way to look forward, maybe the only one, is to look back.

In the late 20th century the notion of text changed due to a range of epistemological issues. The classic idea of the text as an expression of the author’s intention was replaced by the idea of the text as an intrinsic system of values, structures and themes. This “new criticism” was in turn criticized for ignoring intertextual references, social class and cultural context, culminating in the question of whether there was ‘a text in this class’ at all. The ‘work’ itself dissolved into individual interpretations. Yet, at the same time, the notion of text was extended to also include images and videos and possibly other sign modalities. In spite of this, the materialization of the text was still considered a fixed sequence of letters manifested on paper (or papyrus or parchment), assumed to be invariant and thus an insignificant precondition, in accordance with the predominant philosophical idealism within the humanities.

Yet one question was missing: what about electronic representations of texts? Should the electronic format be considered an external and non-signifying material component? A new type of extralinguistic sign modality on a par with images, videos, sounds etc.? A new independent sign modality? Or should it be considered a container that might hold other sign modalities? More fundamentally, the question was raised whether, and if so which and how, material characteristics of digital text could be utilized as signifying components and therefore included in the notion of text.

With the spread of digital media, the material characteristics of texts became still more significant. Electronic text allowed for an ever-growing range of semiotic features. Conceptually, the point of departure was the print paradigm, which to a certain degree still fits a range of new text formats, such as e-books, PDF files, e-mail, word processors and text editors, and professional manuals specifying how to encode stable digital editions of ‘print-like’ text. Even so, the digital version includes material characteristics which make the digital edition different from any printed version.

The difference is a consequence of the fact that digitization always implies that both content and processing rules are manifested and processed as sequences of the two letters of the binary alphabet. Digitization is itself a new kind of textualization. Thus, we can speak of texts defined within a print paradigm of linguistic alphabets and texts defined within a digital paradigm based on the binary alphabet. Among the most fundamental differences are: 1) the writer and editor positions are closed when the text is printed, while these positions remain permanently open as options for digital materials, because the closure is coded and thus editable, even if it can be made difficult to transcend. Digitization thus changes, on a very fundamental level, the possible relations between writer, editor and reader. 2) While the interface of a printed text is fixed in the text, digital text is characterized by a separation between the invisible text and a perceptible, editable interface which translates (and interprets) the binary sequences for human recognition and allows for ongoing editing of the content and of the addresses of this content on the hard disk or server. 3) Electronic text is based on the mechanical level of bit processing, which allows for conditioned and automated operations as well as automated search, editing, scripting and reading of the sequences. 4) While the printed text is delimited as a reading space by its fixation in time, electronic text always comes with editable time dimensions. In principle, each and every bit, or any sequence of bits, can be ascribed its own timescale of variation and thus made a significant part of a message. The limit on the number of possible timescales is our human mental capacity.
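
The point that both content and processing rules are reduced to the same binary substrate can be made concrete with a small sketch (the encoding shown here is UTF-8; any encoding would serve the argument equally well):

```python
# Both 'content' and 'processing rules' end up as sequences over the
# binary alphabet {0, 1}. Here the same reduction is applied to a word
# of text and to a fragment of program code.

def to_bits(s: str) -> str:
    """Encode a string as UTF-8, then render each byte as 8 binary digits."""
    return " ".join(format(byte, "08b") for byte in s.encode("utf-8"))

print(to_bits("text"))            # the content
print(to_bits("print('text')"))   # a processing rule -- same substrate
```

That the instruction and the data it operates on share one alphabet is exactly what keeps the writer and editor positions permanently open: whatever encodes the ‘closure’ of a digital text is itself just another editable bit sequence.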

The development of globally networked digital media since the 1990s has widened the reach of these material characteristics and added new ones. Networked digital media allow for synchronized, global, real-time, and potentially interactive, communication of any sequence of bits. Networked digital media can not only be connected; they can also interfere with each other. Any machine can be accessed (and modified) from any other machine on the network, because hypertext links from one address to any other address on the network may include scripts with instructions to be performed at the destination. Timescales can be defined for any kind of data; they can be built into a system, or specified by an editor or reader. Among the most important utilizations are what can be labelled Multiple Source Knowledge Systems, which combine data from deliberately chosen sources of all sorts, some of them in real time, presented in coordinated constellations. Such systems are today already found in finance, meteorology and climate research, incorporating all sorts of data, some in real time, from all over the globe. Others, such as search engines and similar services and the range of social media platforms, are well known from everyday life. They may differ in a range of media characteristics (datatypes, link structures, timescales of data, timescales for the window of interaction, etc.). They also differ, however, in purpose, cultural values, subject matter, thematic focus, and multimodal formats including dynamic and interactive visualizations.

Multiple Source Knowledge Systems fit well with the 21st century, characterized as it is by increasing global interferences and interdependencies. They can be used for local, regional and global monitoring, including real-time data. Many such systems are developed within the range of issues addressed by the UN Sustainable Development Goals. They may include data from scanning outer space as well as the interior of our bodies, and everything, such as culture, in between. As known from the history of print, we can never predict future genre developments. What we can do is specify how the processes of digitization both include and transcend the print universe of text, and how the new textual universe based on the binary alphabet fits the agenda of the Anthropocene, in which culture and society are to be seen as agencies in nature, and human practices enter a global scale, connecting the survival of culture and society with the survival of the biosphere.

However, these systems have also become part of the problem, as they require huge energy and labor resources to do their job, and they threaten the privacy of individuals, since all digital processes leave binary traces that cannot be kept within any enclosure, as books can in a library. Still, we have seen the rise of a few global information monopolies. The binary alphabet, finally, can function as (programmed) agency, worldwide. Thus, the Future Of Text will also be processed in a time-space full of tensions.

Nick Montfort

Free, Open Data And The Future Of Text Technologies

To shape the Future Of Text, we develop new text technologies. The development of the essentially text-based Internet and World Wide Web are examples; another is the character encoding scheme Unicode, originating in 1987–1992. Unicode massively extends ASCII (the American Standard Code for Information Interchange), which was of great significance but doggedly focused on the Latin alphabet. Unicode is for all contemporary writing systems, along with many historical systems and, of course, emoji. As with Utopian endeavors in general, Unicode is not perfect: each character needs a Latin-alphabet name, and the attempt to identify common Chinese, Korean, and Japanese forms has proven controversial. Still, this inclusive effort brightens our textual future. In the 20th century, such free, open protocols and standards were foundational.
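
The relationship between ASCII, Unicode code points, and those Latin-alphabet character names can be seen directly from a language’s standard library; a brief Python illustration:

```python
# Every Unicode character has a numeric code point and a Latin-alphabet
# name; the first 128 code points coincide exactly with ASCII.
import unicodedata

print(ord("A"))                # 65 -- same value as in ASCII
print(unicodedata.name("A"))   # LATIN CAPITAL LETTER A
print(hex(ord("😀")))          # 0x1f600 -- far beyond ASCII's 7 bits
print(unicodedata.name("😀"))  # GRINNING FACE
```

One encoding space thus spans everything from the Latin alphabet to emoji, which is precisely what makes Unicode a shared, open foundation rather than a regional one.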

Text in this and coming centuries will extend beyond written and printed language, and technology development will require masses of textual data. New computational capabilities are being developed for speech as well as writing, e.g., machine translation, automatic subtitling of videos, and the further development of smart speaker systems such as Amazon Echo and Google Home. The big question for our future is whether such advances can happen on a free and open foundation, including many participants in our culture. Will we have text systems such as the Internet, the Web, and Unicode? Or, will the basis of these new technologies become the exclusive province of a handful of megacorporations, the few entities with the ability to gather and process tremendous amounts of textual data?

Unicode’s foundation helps with the development of automatic translation between languages — including those in different writing systems. But gathering and cleaning language data is also necessary, and an industrial-strength task. For many text technologies, advanced language models are needed, and developing these requires very large amounts of well-edited text. Among other things, this means distinguishing human-written language from computer-generated search engine optimization (SEO) spam. Making this distinction taxes even the few large corporations who gather data for their search engines — essentially just Google and Microsoft, in English. Aside from wanting to own valuable data, there is an obvious problem with these companies opening up their process to the world at large: Doing so would aid the SEO language saboteurs.

Similarly, very large companies are the only significant players in the smart speaker space — those in the US know of Amazon and Google, but the market in the US is tiny compared to that in China. Companies hoard the massive amounts of speech data they are harvesting, the major currency in which people pay for this service. To complicate matters further, research shows that smart speaker purchasers do not understand privacy policies or the way the technology works[1]. Intimate domestic conversations are being gathered in a way that is not ethical. Finally, there are existing projects to ethically gather free and open data for speech recognition — Common Voice, started by Mozilla, is an example. But these projects are still not on the scale of corporate efforts in terms of how much data has been gathered and what quality of recognition has resulted.

Given the crucial importance of text data, its ownership by a handful of companies is problematic. Will there be a single computer interpreter available for our crucial communications, or, for that matter, for people to gain general exposure to writings in other languages and to other cultural discussions? Much to our benefit, we now have free/open-source language tools (word processors, email systems, Web servers) and entire free/open-source operating systems. Could we ever have a free/open-source smart speaker? It would improve the Future Of Text. We also shouldn’t rely on massive technology companies to gather and then give away textual (including speech) data. We must consider other ways that the ingredients of our textual future can be ethically obtained.

New textual technologies are being developed by many. Thanks to projects in the humanities and the arts, cultural forms such as theater and poetry, in which spoken language is central, no longer need to bow down to writing or print as they have for centuries. The standard way of studying a Shakespeare play or a modernist poem has been by reading it in printed form, but now one can not only see a movie version or listen to a recorded reading, but even closely study particular spoken phrases and directly compare different audiovisual texts. The site PennSound, for instance, presents several recordings of Allen Ginsberg reading from “Howl.” Students can consider how these differ in tone and context as easily as they can consult the canonical City Lights book and the facsimile typescript. Books, of course, are still there, supplemented by these spoken texts.

Exhorting Alexa to “read me a poem” is never going to do what PennSound[2] does, just as universities won’t develop a top-notch consumer portal. Still, the poetry website and smart speaker have things to say to each other. This can perhaps best be seen in Genius[3], formerly Rap Genius, a site focused on the written annotation of song lyrics. With its connections to the commercial recording industry, practices of close reading, and crowdsourcing, Genius has succeeded in making textual annotation widespread. The development of text technology could extend this success, allowing new dimensions of engagement with lyrical songs, new ways of studying poems and plays, and of course new possibilities for translation.

Could cutting-edge work on speech be opened up beyond the largest data-owners and data-gatherers? If so, the development of text technologies could productively advance into new areas and could lead to further developments with speech and with text based on writing systems. There are many culturally worthwhile developments we should expect from academics, artists, and entrepreneurs. If text is to have a bright future, there cannot be an oligopoly on language. It is essential that ethically sourced language data, not just our underlying standards and protocols, be free and open. All types of language inscription should be fully engaged, at every stage, by all of us readers and writers who are developing text technologies.


  1. Lau, Josephine, Benjamin Zimmerman, and Florian Schaub, ‘Alexa, are you listening? Privacy perceptions, concerns and privacy-seeking behaviours with smart speakers’, Proceedings of the ACM on Human-Computer Interaction, November 2018, Article 102.

(This one-line quote is from Philip K. Dick, How To Build A Universe That Doesn’t Fall Apart Two Days Later, 1978)

Panda Mery

Manipulation Of Words

Patrick Lichty

The Janus Complex: A Crisis Of Futurism And Archival

In considering the Future Of Text, I want to use the metaphor of Janus, God of Endings and Beginnings, as reflective of my feelings about the future of communication, and of our species. Given that the pace of development in the Anthropocene has come to follow an exponential curve, I wish to set the Kurzweilian notion of exponential development that reaches escape velocity from humanity’s previous limitations against the possible Malthusian future in which exponential development meets the approach of structural limits or systemic collapse.

From the utopian perspective, the notion of text as medium has expanded into notions of Multimedia, Intermedia and others, where text has gone far beyond singular print or singular narrative. Derrida’s idea that there is no outside-text (not that there is nothing outside the text) has dissipated, and with the coming of augmented reality, the embedding of text into our perceptual environments may become ubiquitous. However, I feel this is too large a conversation for this short text.

What I intend to do is consider the roots of spatial texts in online space and the use of UI as a textual scaffolding agent, and muse about their implications in contemporary milieux. In the 1990s, the exploration of hypertext in online space, first championed by great thinkers like Ted Nelson and, to an extent, Doug Engelbart, was carried forward by Daniel Brown, Steven Holtzman, and Roy Stringer. Holtzman created the Perspectaview hyperbrowser, which played off Nelson’s spatial ideas of text in online space but attempted to create a form of 3D World Wide Web through fractally collapsible local text/media spaces that could be interlinked. Stringer and Brown, through the design firm Amaze, created UI paradigms for textual construction in the late 1990s that have few parallels to this date: the Navihedron and Noodlebox.

Stringer’s Navihedron was intended to create UI experiences by using the vertices of a Platonic solid as points for hyperlinks to other media, creating what I described to Roy as “cognitive molecules” that would unfold from one solid and spread out into arrays of multicolored solids strung together with cognitive links. In some ways, this is similar to Terence McKenna’s ideas on concrete language in VR in the 1992 radio program Virtual Paradise. In it, he discusses the topological aspects of language and how they can be assigned to geometric shapes and ordered structures that unfold as one speaks in virtual space, presenting to others a kind of sculptural deep grammar that could surpass language. Brown’s Noodlebox was a UI web technology that allowed users to dynamically reconfigure web content by moving boxes containing links in various manners, which was powerful but had issues common to all of these experiments.

In my opinion, the late-90s experiments with online text suffered from the imposition of structural conceits that, however open, were too constraining to the fluidity of language. McKenna’s discourse was, as he himself said, a “VR fantasy”; Stringer’s Navihedron limited itself to the Platonic solids (and abstract polygonal solids seemed unworkable as interfaces); and Noodlebox may not have been intuitive enough for the common user. Add to this the elimination of Flash as a web technology, which raises the key question of the persistence of any technology and further complicates the conversation.

For the sake of that conversation, grant the conceit of a future of technologically mediated text with multi-layered forms of editing. After twenty years, I feel that forms of mind-map interface (such as FreeMind or TheBrain) have been most effective for my purposes, and, for flexibility, the wiki (a form of open, editable text) remains a strangely flexible form of writing. As I suggested in my essay on dynamic writing in online spaces, Art in the Age of Dataflow, open editing, or scholarship whose content changes dynamically in response to its subject, might be the models I want to pursue as a future-text paradigm. However, if UN projections of the ecological future signal the coming end of the Anthropocene, the reverse view of my metaphorical “Janus Strategy” comes into effect.

If we consider the effect of the projected ecological outcomes for the current civilization as a result of the Anthropocene, perhaps the strategies of the Long Now Foundation are more relevant. Projects like the Rosetta Disk have the merit of being extremely durable and replicable, but the issue of magnifying the etched text remains. For example, over 1,500 languages etched on a metal and glass object with an estimated shelf life of 10,000 years still presumes certain technologies in order to be read. Like Rod Taylor’s character in the George Pal adaptation of H.G. Wells’s The Time Machine, who spun laserdisc-like objects on a table to retrieve their media, one has to presume certain technologies: in order to read a Rosetta Disk, one has to assume that a future post-Anthropocene society has access to lenses to read such an artifact. What may be a more viable strategy is one akin to the legendary brass plates of Laban in the Book of Mormon. In this narrative, these plates contained the knowledge of the House of Lehi throughout the centuries. Leaving the veracity of such an account to theologians, it offers a central metaphor: inscription on inert media, accessible without magnification, that would remain readable for thousands of years. This again relies on other assumptions, such as the civilizations succeeding this one having a human communications paradigm, or being close enough in time to connect future languages to today’s.

Perhaps the far future of text may lie in the realm of the Pioneer space probe plaque, which builds from basic principles such as the hydrogen atom and proportions that future creatures might obtain from a fossil record. Consider for a moment that what was designed for alien species may be reinterpreted by our evolved cockroach successors, so we may wish to consider how to encode our future intentions in a DNA sequence or a pheromone chain.

Paul Smart

The Story Of Our Lives: Human ‘Heptapods’ And The Gift Of Language

Denis Villeneuve’s 2016 film, Arrival, is based on Ted Chiang’s sci-fi novella titled “Story of Your Life.” The film depicts the arrival of alien visitors, called heptapods, to planet Earth. During the course of the film, we follow the efforts of linguist Louise Banks (Amy Adams) as she attempts to decipher the aliens’ cryptic language. Eventually, as Louise begins to understand the meaning of the inky circular symbols, she begins to acquire an extraordinary new ability. Rather than her memory being confined to the recall of past events, Louise is now able to ‘remember’ her future. The alien language, it turns out, is a gift bestowed on humans by the heptapods. The language, we are told, “opens time.” It enables its user to see the entire ‘story’ of their lives by blurring the distinction between the past, present, and future. Louise thus undergoes a profound, linguistically-mediated transformation in her cognitive capabilities. She is now able to recall her future, just as she is able to recollect her past. The upshot is that Louise already knows the things she will learn in the future, even before she has actually learned them.

In the film, Louise and her colleagues are exposed to the alien language via a screen that separates them from the heptapods. This screen, I suggest, is symbolic of the various screens we use to interact with the online world of the Internet and Web. Via such screens, we humans have created a vast repository of online content, much of it in the form of digital text. This cornucopia of online content has yielded something of an unexpected benefit: it has provided the training material for a new generation of machine learning systems and thereby ushered in a new era of research into artificial intelligence (AI).

The parallels between the fictional world of Arrival and the nature of our own reality thus start to come into sharper focus. We humans, I suggest, are like the heptapods—we are a cognitively-advanced species who, from the standpoint of AI systems, are like beings from another world. The AI systems (in our world) are represented by the humans in the movie. Just like Louise, such systems face a daunting task. To fully benefit from the gift that is given, they must fathom the meaning of the symbolic emissaries of an alien language. That is to say, they must understand the words that are rendered on our various screens.

But what, you might ask, is the value of a linguistic gift? In the film, the value of the heptapod’s language is clear: it relates to the acquisition of a particular kind of cognitive ability—an ability to see the future and thereby coordinate one’s present behaviour with respect to future events. Courtesy of one’s exposure to heptapodian symbols, it is possible to read the story of one’s life without regard to the usual constraints imposed by our traditional understanding of time. The result is that one’s present thoughts and actions are just as much influenced by the future as they are by the past.

The transformative impact of language is a theme that is well-represented in the philosophical and cognitive scientific literature. The general consensus is that our proficiency with language yields an array of cognitive benefits, many of which are unique to our species. We are, of course, a terrestrial species, but our intelligence is nevertheless unusual by terrestrial standards. Parrots, dolphins, elephants, and chimpanzees are all intelligent, but none of them looks set to build a rocket and travel into outer space. Humans evidently can do this, and it is arguably our facility with language that makes this (and many other things) possible. Human intelligence is, in this sense, an ‘alien’ intelligence. There is a cognitive chasm that separates us from other forms of terrestrial life, and language arguably holds the key to understanding the nature of this divide.

The nature of the gift is thus revealed. By uploading our language to the online world, we provide an opportunity for AI systems to enjoy the sorts of cognitive benefits that we ourselves enjoy. The gift is all the more generous when one considers the amount of time it took to forge our arsenal of linguistic tools. The invention of human language is thus the product of a protracted period of cultural and biological evolution that dates back thousands, if not millions, of years. By offering our language to the denizens of the online world, we arguably save our AI systems the trouble of navigating a long, tortuous, and no doubt hazardous path to the top of the cognitive mountain.

We can, of course, question whether our linguistic offerings ought to be seen as a ‘gift’. After all, there is an important difference between humans and heptapods when it comes to issues of intent and motivation. In the film, the heptapods have a specific reason for visiting Earth. Courtesy of their prospective capabilities, the heptapods know they will need humanity’s help in the future, and this is why they offer us their language: they seek to enhance our capabilities so that we will be in a position to help them when the time comes.

Clearly, this is different from the sort of motivation that drives our own linguistic contributions to the online world. But perhaps we should not be so hasty to dismiss the parallel between heptapods and humans. Perhaps we do not see the full details of the human story—the story of our lives—but this does not mean that we are oblivious to the threats that lurk in the pages ahead. Neither does it mean that we have no concern for how the human story ultimately ends. In this sense, our linguistic offerings are surely justified. The heptapods offer their language on a screen, and so do we. Whether our current panoply of intelligent machines will get the message is unclear. Personally, I hope they do, for I suspect that we humans may need their help…in time.

Peter Cho

How Might We Express More With Our Type?

When I imagine the Future Of Text, my head spins. I think of Vannevar Bush’s initial vision of hypertext, where the links would connect two information sources in both directions, rather than just one to the next. New modes of authoring and reading text, like tap stories in Snapchat and Instagram, come to mind, as do onscreen interactions that would allow for asides and branching narratives to be offered to a reader in a seamless way. I get excited about type in three dimensions, as viewed in VR and AR environments, and I wonder about how emojis, memes, and new visual modes of expression change how we think of written language. All of these topics stir my intellectual curiosity.

But I believe the words we read and write are more than the sequence of Unicode characters and the mechanisms for delivering these messages. They exist in a graphic world informed by the real and synthetic images we see, a century of viewing visual language on screens, and a rich history of typographic design. The opportunities for the Future Of Text that hit me at a visceral level and where I want to spend my time personally are in these areas: how novel type forms can be designed to be more expressive, how custom lettering crafted for a specific message can convey a more powerful or more nuanced meaning, how words can be put into motion for expressive effect, and how computation can expand a designer’s creative toolkit.

From an early age, I’ve had a passion for typography and type design. Growing up, I was part of the early wave of “desktop publishing” on the Mac. I would study Adobe’s Font & Function catalog and beg my parents to buy me fonts I could use to make self-published zines. In college in the 90s, I met John Maeda and became his student at the MIT Media Lab. He championed the idea that when someone has expertise in both computation and the visual arts, the works they create are better informed by both sides of the brain and different from what an artist and an engineer would make together collaboratively. He encouraged me to combine my love for type with computation, asking me to consider prompts like, “How might you play a letter like you might play a musical instrument?”

My career has taken me on various paths through digital typography: from designing motion graphics for IBM, to inventing interactive textbooks for the iPad, to creating tools for writers and publishers. Two years ago, I went back to school to learn how to design type at Type@Cooper West, where I was trained on the historical conventions of type design. I found that many type designers today embrace computation—coding up scripts and utilities is often an integral part of the process of making a font. In recent design explorations, I’ve been incorporating my new-found type design knowledge into custom animations coded in Processing.

These days I’m inspired on a daily basis by type designers, graphic artists, motion and 3D designers, and creative coders who share their work on the web and social media. I’m inspired by work from Zipeng Zhu, an NYC-based graphic designer, whose word animations are an explosion of color, wit, wordplay, and sex all in one. Stefan Sagmeister’s “Things I’ve Learned in My Life So Far” project involves site-specific installations of life lessons the designer has collected in his diary: one example involved building-scale inflatable monkeys holding each word in “Everybody always thinks they are right” in cities across Scotland. Vincent Deboer is a Dutch artist who specializes in large-scale brush and ink lettering in a dynamic, vibrant style. A team of three Scandinavian type designers and technologists called Underware have developed Grammato, a system built on standard web technologies that can programmatically animate infinitely smooth written text, where text keeps its semantics.

I’m part of a crew of type design alumni who participate in a regular creative lettering contest. We pick a theme each week and generate a “typecooker,” a recipe of parameters and characteristics that the type designs need to follow. We post our 15-minute sketches in our private Slack, vote on them anonymously, then post them in order of votes to a shared Instagram account. We push ourselves and each other to merge the type with the words in a clever and surprising way.

In recent years, Instagram and Twitter have seen an explosion in type design, calligraphy, and graphic design. Designers can expand their reach and share their ideas with like-minded peers. Creative folks from anywhere can learn new techniques and ideas from others and be inspired to produce more work. It has a welcome leveling-up effect across the board, and it gets more people excited about type design.

But social media can feel like a shallow performance of typography and type ideas, just a way to gather likes and follows. The momentary nature of the news feed prioritizes work that makes a quick impact at the surface level. A layered, complex project, something that could have taken weeks, if not years, to create, is minimized when reduced to a slideshow or two. Social media doesn’t feel like a place to make a lasting, true statement. Could there be new forums for expressing our typographic ideas, where the message is an integral part, and where the concepts can land with more depth and lasting power? The Future Of Text I look forward to has type designs that are weird, unique, animated, and more expressive—text with the power to move people.

A collection of “typecooker” sketches by the author

Peter Flynn

In The Midst Of Nowhere

In the third of Larry Niven’s ‘Ringworld’ books [4], human protagonist Louis Wu is re-negotiating his contract of service. His prospective master is once again the Hindmost, a former leader of the manipulative Puppeteer species. They were responsible for Louis’ original odyssey to save the solar-system-sized Ringworld from instability, and also for his subsequent abduction to undertake further repairs and save the Ringworld from complete destruction, sacrificing some billions of lives to save the remaining trillions living there.

Louis is now retired, living in a pleasant river valley where he has been a guest of the local weaving and fishing communities for some years. The Puppeteer is halfway round the ring-shaped world, in his disabled starship, which Louis himself partly wrecked in an earlier attempt to force the Hindmost to act. They are speaking via a link using advanced Puppeteer technology, which lets Louis pop up the contract in mid-air, and use his hands and fingers to edit the text while he stands on the river bank arguing his points with the Hindmost.

Niven wrote this series of stories between 1970 and the early 2000s. He is a master of ‘hard’ science fiction (based on fact and the laws of physics), so it is not surprising that he depicts scientifically plausible scenarios. We do indeed nowadays (late 2019) have the technology to display holographic images, and we have gesture interfaces which can be used to edit within them [8]. We don’t yet have the ability to send and receive that kind of data in real time over interplanetary distances (the Ringworld is about a billion kilometers in circumference), but there are people working on it [1].

But the important point is not the technology; it’s that the formal document, Louis’ contract, is still text, and still needs editing — and it doesn’t matter whether it’s alphabetic, syllabic, pictographic, or emojis, as the Hindmost’s translation software can display it in whatever language Louis cares to use. By contrast, in Samuel Delany’s groundbreaking linguistic novel Babel-17, the Çiribian civilisation communicates by heat transfer [2], and in the highly-regarded ‘Darmok’ episode of Star Trek: The Next Generation, the Tamarian civilisation communicates entirely in metaphor [5]. On our planet, however, we follow Louis and the Hindmost in using text.

We use text to record both thought and speech, to preserve them, and to exchange them with others. Our species appears unlikely to give up either thinking or speaking, although the cynic in me says we do too little of the former and too much of the latter. But although speech and sign languages remain the primary linguistic channels of human communication, an ever-growing number of us live in text-dominant societies. In the Europe of the Middle Ages, when reading aloud was commonplace, it was for a period felt that silent reading in company was dangerous because there might be something in the text that you or the author might not wish to share with your colleagues or family or friends (which was sometimes true) [6]. Technological advances now allow us to build communities outside our immediate geographical or social bounds without uttering a single word; instead, we use digital interfaces to manipulate text. Ironically, the text we share within and between these communities can itself now be suspect, in much the same way that some medieval text was, and for similar reasons.

Millennia after the first texts of which we have concrete (or, I should say, clay) evidence [3], many of us still record our thoughts for future reference by making marks on a surface. At the moment, our representations of those marks may be currents or voltages and the surface may be the junction between materials of differing conductivity, but they could just as easily still be pencil and paper, or any of the dozens of tools and substrates used over the centuries. We edit the marks by interference, adding, deleting, changing, and moving, using anything from green ink to an XML editor. Depending on the materials involved, changing what we have written can be anything from trivial (a word processor) to virtually impossible (an inscription on granite). Scholars love it when writers keep copies of their text in various stages of development because it can reveal things about the processes authors use when thinking and writing, but most of us still fail to keep even one previous version to go back to in case of accident. Wide distribution has long been a defence against the loss of text, better than any backup [7], so sending copies to others has a purpose beyond just communication.

If we continue to develop as a species, we may use other methods of representation that we have yet to discover. Our interpersonal mode of discourse may evolve, and become purely cerebral, or even abstract. Technology may advance to the point where we can share our thoughts with others brain-to-brain. But if we want to communicate these thoughts across time and space, to others of our species or to another species entirely, while preserving them for others and continuing to use our existing historical information, some kind of marks will still need to be made and edited. Text in one form or another is probably going to be around for the foreseeable future.


  1. Bennett, C.H.; Brassard, G.; Crépeau, C.; Jozsa, R.; Peres, A.; and Wootters, W.K. (1993) Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. In Phys. Rev. Lett. 70, pp.1895-1899, DOI 10.1103/PhysRevLett.70.1895 .
  2. Delany, Samuel R. (1966) Babel-17. Ace Books, New York, NY, ISBN 0375706690.
  3. Meadow, Richard H and Kenoyer, Jonathan Mark (2005) Excavations at Harappa 2000–2001: new insights on chronology and city organization. In Jarrige, Catherine and Lefèvre, Vincent, South Asian Archaeology 2001, Éditions Recherche sur les Civilisations, Paris, ISBN 2865383016.
  4. Niven, Larry (1996) The Ringworld Throne. Del Rey, New York, NY, ISBN 0345358619.
  5. Piller, Michael and Menosky, Joe (1991) ‘Darmok’, episode 102 of Star Trek: The Next Generation, Paramount, Hollywood, CA.
  6. Saenger, Paul (1997) Space between words: the origins of silent reading. Stanford University Press, Stanford, CA, ISBN 9780804740166.
  7. Waters, Donald and Garrett, John (1 May 96) Preserving Digital Information: Report of the Task Force on Archiving of Digital Information. The Commission on Preservation and Access (1400 16th St. NW, Suite 740, Washington, DC 20036- 2217) and Research Libraries Group, Inc (Mountain View, CA); Reports Evaluative/Feasibility (142), ISBN 1887334505, URI, URI .
  8. Yamada, Shota; Kakue, Takashi; Shimobaba, Tomoyoshi; and Ito, Tomoyoshi (2018) Interactive holographic display based on finger gestures. In Scientific Reports 8 : 2010, DOI 10.1038/s41598-018-20454-6, ISSN 2045-2322, URI


I am grateful to Dr Bethan Tovey for linguistic help and proofreading at short notice.

Peter Jensen & Melissa Morozzo

The Future Of Text

At Moleskine, our interest lies less in the Future Of Text and more in the future of ideas and of knowledge curation and sharing – regardless of format. We believe in the cognitive and emotional benefits inherent to handwriting, but we also see it as merely one part – albeit fundamental – of a bigger picture where content evolves in an ebb and flow between page, screen and people.

We live for the unique magic conjured when pen hits paper – for moments of glorious, unguarded self-expression. On a cognitive level, we understand that the brain is more engaged when we use our hand to physically shape letters on the page; writing by hand has been shown to improve long-term memory retention when compared to taking notes with a keyboard. In terms of psychological benefits, handwriting places no filter between you and your ideas, while a glaring screen and generic fonts create distance. This is especially true when journaling to declutter the mind and externalize stress, or when brainstorming to get to the heart of a tricky issue. What’s more, with our computers and devices literally abuzz with distractions, a notebook is a quiet haven, a private and safe space to clarify thoughts and find focus.

Whether you’re a master calligrapher or take notes in an unintelligible doctor’s scrawl, writing by hand is a highly personal and fundamentally human act. If you think this sounds overly romantic or old-fashioned, take a moment to imagine a handwritten letter in comparison to a typed one, or a bulging scrapbook compared to a blog. In both personal and professional projects, paper brings a warm, tactile, personal and physical dimension that flat digital documents can’t offer. In 2020, a notebook is still, without doubt, a thing to be treasured forever as a snapshot of who you were in a moment of time.

That said, it is 2020 and we couldn’t do without our digital devices and the endless possibilities they bring in terms of creativity, communication, productivity and collaboration. Rather than scorning digital, we see it as the perfect partner for our trusty pen and paper – complementing not competing. In fact, we’ve been working for the best part of ten years on finding ways to combine the advantages of freehand self-expression with the limitless digital tools at our fingertips. This has grown from notebooks that use capturing technology to transfer page contents to screen, to the Pen+ Ellipse smart pen and its Smart Writing System. The Pen+ Ellipse instantly digitizes every stroke to the Notes App, relying on a combination of Bluetooth® technology and invisible Ncode embedded in the pages of the familiar Moleskine notebook, Smart Cahier Journal or Smart Planner. This system allows users to edit, archive and share handwritten contents digitally in real time, as well as transcribe handwriting into digital text – ready to be exported in a variety of formats.

Our goal with these connected, intuitive tools is to allow our users’ thoughts and imagination to take centre stage; whether digital or paper, we seek to design instruments that quietly empower creative thinking, rather than overshadow it. That’s why the Pen+ looks and feels like a regular pen – no distractions, whistles or bells – and our “smart” paper formats are beautifully yet simply designed notebooks. It is also important to note that we see these tools as part of a creative back-and-forth, where returning to the page is essential to keep focus while developing an idea digitally. Creativity is not a linear process; it is often wonderfully messy and all the richer for it. A cyclical approach of creating, curating and then revisiting allows us to discover a wider spectrum of ideas, leading to potentially serendipitous encounters even in our own notes. In this sense, the best kind of research happens when we re-search.

In the context of the Future Of Text, we envision finding more and more ways to curate and share ideas planted on the page. This means moving away from flat digital documents to a more dynamic, customizable, multimedia approach with paper at its heart. Starting from a belief in the inherent power of capturing thoughts by hand, we want to optimize ways of curating and sharing on different digital platforms. For example, we teamed up with PaperTube as part of our Works With program to build a system around the Pen+ Ellipse that allows users to create bespoke presentations combining video alongside real-time handwritten or hand-drawn content. Great for knowledge sharing and conceived for educators and students, it lets users record videos that incorporate freehand sketches, diagrams or doodles to bring concepts to life. PaperTube offers a more personal, spontaneous touch to team projects, remote working and academic assignments.

We currently have another Works With partnership in the pipeline with KJuicer, a knowledge distiller platform that uses machine learning and the highlighting of text to automatically summarize documents. Together, we have created a prototype that takes text from the Notes App and sends it to the KJuicer web editor. This project aims to make written, textual information more user-friendly, sharable and individually tailored through the addition of digitized notes lifted from the page.

To conclude, we see the Future Of Text as being about finding more ways to amplify and empower all kinds of knowledge curation. In this, we recognize that pen and paper have an unparalleled role to play alongside today’s digital knowledge management systems. Our belief in the “shared intimacy” that results when page and screen talk to each other will stay at the heart of everything we do as we move into the 2020s.

Finally, and if you’re wondering, the first draft of this piece was written out longhand as a series of notes in an A4 ruled Classic Notebook. Curating those notes inevitably involved lots of sifting, crossing out, underlining, turning the page and then turning back again. Those rough notes provided the backbone of the finished text and remained an essential point of reference right up to the last edit.

Peter J. Wasilko

Text Meets The Command Line — Toward A Collaborative Control Metalanguage


Since the dawn of personal computing we have witnessed a steady growth in the complexity of our user interfaces. (Stephenson 1999; Wardrip-Fruin and Montfort 2003)

Early graphical systems with minimal functionality permitted a direct mapping of their commands to a standardized menu, palette, and dialog organization. This produced a clean design aesthetic, where all functionality was readily discoverable through affordances based on the “desktop metaphor” of files, folders, devices, and office supplies.

As features proliferated and alternative development tools emerged, there was an explosion in the number of representationally inconsistent but functionally equivalent menus, palettes, dialogs, and icons across programs — forcing users to rely on “tool tips” and idiosyncratic documentation. Herein lies the path to madness.


There is however another way to enter content and invoke commands — an older, and somewhat more elegant path born of the Spartan affordances of time-shared operating systems and interactive fiction environments (Jackson-Mead and Wheeler 2011). The humble textual command line offers infinitely more flexibility than any menu or dialog system without the visual clutter they entail. (Gancarz 1995) One need only establish a shared language with the system (Raymond 2008), which one can achieve through a bootstrapping process.

Moreover, textual directives need not be transitory. They can be freely converted between internal and external forms, being parsed (Ford 2004) from or transcribed into a document as markup or recorded directly as transactions in an external Open Hypermedia System (Grønbæk and Trigg 1999) — allowing multiple authors to independently or collaboratively edit a document they do not own without altering it.

To achieve these effects, version control and coordination are needed. This can be done through a Software Transactional Memory (Milewski 2010) as in Haskell ( 2020), or a series of Operational Transformations as in ShareJS (“ShareJS – Live Concurrent Editing in Your App.” 2016), or the application of a Theory of Patches (Kow 2018) in a distributed version control system as in DARCS (Becker et al. 2020) and Pijul (Meunier and Becker 2019). In each case, we can factor the editing process into deltas (Chen, Unite, and Šimánek 2019) bundling simple changes like inserting or deleting text or altering its associated attributes, as in the Parchment (Chen [2015] 2020) substrate of the Quill editor (Chen 2019). To make user interactions replayable, our external representation might also want to capture temporal metadata, allowing us to subsequently query the system to identify regions of text that were edited in a given timeframe or frequently altered. We could also inject Purple Numbers (Kim 2001) as per the Authorship Provisions of Augment (Bardini 2000; Engelbart 1962; 1984).
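The factoring of edits into deltas can be illustrated with a small sketch. This is an assumed toy format of my own, not Parchment’s or Quill’s actual API: an edit is a list of retain/insert/delete operations, bundled with author and temporal metadata so interactions could later be replayed or queried by timeframe.

```python
import time

def apply_delta(text, delta):
    """Apply a list of (op, arg) operations to a string."""
    out, pos = [], 0
    for op, arg in delta["ops"]:
        if op == "retain":            # keep the next `arg` characters unchanged
            out.append(text[pos:pos + arg])
            pos += arg
        elif op == "insert":          # splice in new text at the current position
            out.append(arg)
        elif op == "delete":          # skip over `arg` characters of the original
            pos += arg
    out.append(text[pos:])            # keep any trailing text
    return "".join(out)

# A delta carrying authorship and temporal metadata alongside its operations
delta = {"author": "editor-2",
         "time": time.time(),
         "ops": [("retain", 6), ("delete", 5), ("insert", "reader")]}

print(apply_delta("Hello world", delta))   # prints "Hello reader"
```

Because each delta is a self-contained record, a sequence of them can be stored externally and replayed against a document the editors do not own, which is exactly the decoupling the Open Hypermedia approach depends on.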

Thus, when working in command line driven mode, the command line proper would serve as an input buffer for the currently active document. Of course, a text input buffer alone is too blunt a tool, but as a driver for ancillary displays it holds tremendous potential.


Posit a Calm Technology (Case 2016) similar to the design of Author (Hegland 2020) or Mercury OS (Yuan and Jin 2019; Yuan 2019) inspired by Maeda’s “Laws of Simplicity” (Maeda 2006) with just a command line buffer initially rooted at the foot of the screen. Peripheral control widgets would fade into view if the cursor lingered near them and fade away as it moved off. All widget operations could also be invoked directly via the command line.

A layout selection widget just above the command line would present a strip of tableau icons depicting typical arrangements of text and reference regions. Selecting a layout would cross-fade that configuration of regions into view, and mousing into a freshly opened region would trigger a document/node selection process that could be completed graphically or textually, while clicking in the background of an active region would give it persistent focus.

Just above each region would be a mode selection widget, inspired by the display mode selectors in Tinderbox (Eastgate Systems, Inc. 2020; Bernstein 2007) and Author (Hegland 2020), letting the user choose between focusing on an item’s attributes, links, components, history, markup, and presentation.

Depending on one’s privacy settings, actions taken on selections and editing stats might be shared with the community to support collaborative activities.

The Markup representation would be critical in transferring content over legacy communication channels in a human-friendly notational style leveraging the caps lock key as a command mode designator. To avoid the parsing challenges introduced by LaTeX ( 2020) and SGML inspired tag minimization (McGrath 1998), where sectioning commands are implicitly closed, we would require closing tags as in: UNNUMBERED CHAPTER: … :UNNUMBERED CHAPTER and CHAPTER: … :CHAPTER: … :CHAPTER
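Because every sectioning command must be explicitly closed, checking well-formedness needs none of the heuristics that tag minimization forces on SGML parsers. A hypothetical validator (the function name and token pattern are my own sketch, not part of any proposed system) can get by with a plain stack; it also handles the compact `:CHAPTER: … :CHAPTER` form, where one token closes a chapter and opens the next:

```python
import re

# "NAME:" opens a section, ":NAME" closes it, and ":NAME:" closes one
# section and immediately opens another. All-caps runs (including spaces,
# for names like UNNUMBERED CHAPTER) are treated as command names.
TAG = re.compile(r"(:)?([A-Z][A-Z ]*[A-Z])(:)?")

def well_formed(text):
    stack = []
    for close, name, open_ in TAG.findall(text):
        name = name.strip()
        if close:                      # ":NAME" must match the innermost opener
            if not stack or stack.pop() != name:
                return False
        if open_:                      # "NAME:" opens a new section
            stack.append(name)
    return not stack                   # every opener must have been closed

print(well_formed("CHAPTER: text :CHAPTER"))             # prints True
print(well_formed("UNNUMBERED CHAPTER: text :CHAPTER"))  # prints False (mismatch)
print(well_formed("CHAPTER: a :CHAPTER: b :CHAPTER"))    # prints True
```

The simplicity of the checker is the point: requiring explicit closers trades a little typing for a grammar that both humans and machines can verify at a glance.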

We would also use the caps lock to introduce Quasi Natural Language inline commands which might:

A command suggestions panel could be part of a default layout, swept in from offscreen with a gesture, or ideally positioned on a wirelessly connected tablet. Commands could be contextually grouped (Anderson 2020) into major and minor editing modes in the EMACS (Stallman 1981) tradition (e.g. queries, display directives, reference commands, or collaboration functions) and accessed through navigable state transitions, a zoomable tag cloud, or an inverse parser (Crawford 1992). Literate Programming (Knuth 1992) and Versioning facilities would be driven by HERE WE, NOW WE, and NOW I directives to define and invoke named fragments and tag one’s activity. The command line could also support state-machine driven workflows and dialogs to perform information extraction (Pazienza 1999), grammar development (Ford 2004), and semi-automated semantic tagging.

Overall this vision argues for a greatly simplified text-centric design aesthetic backed by domain specific mini-languages (Kleppe 2009; JetBrains s.r.o. 2020) and mixed-initiative (Minsky 1969; Harris 1985) collaboration facilities in the Programmers’ Apprentice tradition (Wardrip-Fruin and Montfort 2003; Winston and Massachusetts Institute of Technology 1988, chap. 23). While commands will appear as all caps in simple text media, a dedicated system would treat the caps lock key as a modifier and render commands in a more pleasing typographical treatment.


Phil Gooch

The Future Of Scholarly Communication

In a recent survey, the organisation Sense about Science, in collaboration with Elsevier, found that researchers spend almost as much time searching for articles relevant to their work as they do reading them [1]. Part of the reason for this is that researchers are making their output available earlier in the publishing cycle, and on a multitude of platforms (such as preprint servers). How can researchers reduce the burden of constantly searching, particularly when many of those searches are repeated by others and retrieve similar sets of results? And how can readers, both experts and the general public, verify the trustworthiness of the latest research? The news media is usually no help here: it often misrepresents the findings [2] and rarely provides a link to the primary source. The future of research communication is relevant to the Future Of Text because research is primarily communicated and consumed in textual form. This sounds obvious; reproducible research in the form of shareable datasets, code, and experimental protocols is also important, but it lies outside the scope of this brief chapter.

My Future Of Text in relation to scholarly communication, which would enable easier verification of findings, improved discoverability, and greater public understanding, would look something like this:

1. Writing tools for research should work both online and offline, and should be both easy to use and able to save in a structured format such as XML. While the research may still be consumed in PDF format, structured content provides rich metadata essential for discoverability (reducing the search burden), plus the ability to link out to related resources, and more opportunity for reuse.

2. Advances in AI will help create this structured content. Right now, tools are available that will automatically structure bibliographies and link citations to free-to-read versions of publications on the web, and add interactivity such as definitions and explanations of key concepts. This trend should continue, with tools that will check citations for validity, and verify that descriptions of findings match the actual data.

3. Citations should carry their own findings. It’s not enough to simply provide a reference to a source and assume the reader will go and find it and maybe read it. All citations should provide a link to the source, which allows AI tools to consume and summarise the main findings without having to leave the citing document. We’re already seeing the emergence of tools that will do this. In future, nodes in citation networks should reference the main findings in both machine and human-readable format. This will make it easier to verify that sources are not being misrepresented.
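A citation that “carries its own findings” could be as simple as a small structured record pairing a link to the source with the same finding stated twice, once for humans and once for machines. The sketch below is purely illustrative: the field names, the DOI, and the example finding are my own placeholder assumptions, not an existing standard.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Citation:
    """A citation node that carries the cited work's main finding."""
    source_doi: str   # link back to the primary source
    claim: str        # human-readable statement of the finding
    finding: dict     # machine-readable version of the same finding

# Placeholder values for illustration only
cite = Citation(
    source_doi="10.1000/example",
    claim="Handwritten notes improved recall versus typed notes.",
    finding={"outcome": "recall",
             "direction": "increase",
             "comparator": "typed notes"},
)

# The serialised form a summarisation tool could consume without
# ever fetching or parsing the cited document itself
print(json.dumps(asdict(cite), indent=2))
```

With nodes like this in a citation network, checking whether a citing paper misrepresents its source becomes a comparison between two structured records rather than a full-text reading exercise.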

4. Research papers should encapsulate information both for expert and lay readers. Particularly in the sciences, most papers are written with the assumption of reader expertise in the field. This is starting to change, with a number of publishers providing summaries for the lay reader that explain the background to the research, and why the findings are important. However, this puts more of a burden on authors to write two versions of the abstract. Therefore:

5. Documents should be self-describing. AI will help with this process, identifying the entities, their relations, and their links to wider knowledge bases, and encapsulate this information within the document metadata. Taking this further, AI will help draft both the scientific abstract and the lay summary, to assist in the research communication process.


  1. Sense about Science, Elsevier. 2019. Trust in Research. Available from:
  2. McCartney Margaret. Margaret McCartney: Who gains from the media’s misrepresentation of science? BMJ 2016; 352 :i355. Available from:

Pip Willcox

If I Cannot Inspire Love, I Will Cause Fear

Words are our most precise means of conscious communication. They matter. The ability to understand and reproduce abstract and palpable meaning between animals is, we believe, distinctively human. Distinctive too is our physical representing of text in coded and shared forms.

Articulation of words — creation of text — is an act of hope, an expression of faith that someone — stranger, friend, machine, ourselves in other times — will read it, with sympathetic imagination. This act of receptive participation creates meaning. Humpty Dumpty claims “When I use a word, […] it means just what I choose it to mean — neither more nor less”, but a word is more than its meaning. Its etymology is a travelogue: we reconstruct the paths it has taken as best we can, from its tickets and postcards we trace through relative time and space in texts. Lyra Belacqua, seeing like Riley that perception is the medium, believes “Everything means something […] We just have to find out how to read it.” This promises the generosity of close reading — close hypertexting — as a collaborative process.

The ability to share a theoretically infinite and co-created tapestry of connections, of contexts and annotations, demands an exchange: an acceptance both that we are physically limited by the extent of our thought and our memory, and that our machines can outpace us as they process and link an abundance of lost or stolen or strayed data into information. This feeling of crisis, common to the best of our knowledge in every age of sociotechnical evolution, permeates everything: we fear that our machines will overpower our minds as they can our bodies. As we enabled machines to take the high ways, pushing unvehicled people to side-walks, we can choose to give ourselves over to the noise of framelessness and unknowable, unprovenanced, untrusted networks of linked text.

But we could change fear for exhilaration. We could imagine an engine, trained — cleanly, efficiently — on our languages, etymologies, histories and cultures, to understand, pass on, and meaningfully connect our texts. This would be no Key to All Mythologies, but a partner in writing guidebooks to our texts, pointing out silences, discovering and creating meaning together. To collaborate, we need to know our editor-guides, various as our cultures, plural, multivocal, multifocal, multilingual, multitemporal, multispatial, flawed, explained, understood, trusted, human, machine and vivosilicant. Shelley knew our hearts were fashioned to be susceptible of love and sympathy. We have the technology to reify these through a design that enables clear-sighted, radical trust, where provenance and context prevail over cut-through. This is the moment we decide whether text as we perceive it has a future.

“There are many things we have not yet learned to read,” Belacqua understands. Angelou frames this: “all peoples cry, laugh, eat, worry, and die…if we try and understand each other, we may even become friends.”


With gratitude to Mark Andrejevic, Maya Angelou, Lewis Carroll, George Eliot, Mei Lin Fung, Wendy Hall, A A Milne, Philip Pullman, Bridget Riley, and Mary Godwin Shelley.

Rafael Nepo

Everything Begins With Text

Everything Begins with Text.

Think about some of the most difficult issues we’re facing today. Global Warming, Medicine, Education, Artificial Intelligence, Communication, Politics... No matter the issue, one thing is constant. Humans.

We are the ones that are changing and modifying the world to our will. During this process, we have made a lot of mistakes while we explored ideas and materials. But there is another constant. Information. Curious Humans with access to information are a force to be reckoned with. They’re able to move mountains, cross oceans and dominate the skies.

And yet, the artefacts that inspired those dreams are slowly being forgotten. Encyclopedias like the Book of Knowledge and the Whole Earth Catalog are relics from that past. In the excitement of all the new technologies and tools developed during the past 30 years, we ended up losing sight of that, leaving not only ourselves behind, but our past knowledge as well.

The issues we face today are not new. We’ve been dealing with fake news since the dawn of time. The term Information Overload has been used since libraries were stacked to the ceiling with scrolls and books.

We propose that to think about the Future Of Text, we have to look back, stand on the shoulders of giants and continue from where they left off. We might not need new tools; we just have to optimize the ones we already have.

We believe that if we apply Context, Modularity and Curation to the whole Web, we might be able to help solve a lot of issues that the world is facing right now. We imagine a future for text where Humans are the protagonists. Not Machines, AI or Algorithms.

Let’s dive a little deeper into those three topics that make it possible to rethink the way we currently think about information.


Context

After we apply Context to information, we notice a reduction in the amount of content available about that topic. People live in different cities and countries, speak different languages and have access to different stores and brands.

Example: Show me (Books) in (Portuguese) about (Library Sciences) that were published in the last (5 years).

It turns out that Information Overload does not happen very often once we apply the right context. There might be a lot of variables, but with Context, information is more manageable.
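
The example query reads like a chain of filters. Here is a minimal sketch of that idea in code, with an invented three-record catalogue; the field names are illustrative assumptions, not a real catalogue schema.

```python
# A minimal sketch of the "Context" idea: hypothetical book records
# filtered by language, topic, and recency. All records and field
# names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Book:
    title: str
    language: str
    topic: str
    year: int

CATALOGUE = [
    Book("Organização do Conhecimento", "Portuguese", "Library Sciences", 2021),
    Book("Cataloguing Basics", "English", "Library Sciences", 2020),
    Book("Bibliotecas Digitais", "Portuguese", "Library Sciences", 2010),
]

def in_context(book, language, topic, since_year):
    """Keep only items matching the reader's context."""
    return (book.language == language
            and book.topic == topic
            and book.year >= since_year)

# "Show me Books in Portuguese about Library Sciences from the last 5 years."
results = [b for b in CATALOGUE if in_context(b, "Portuguese", "Library Sciences", 2019)]
print([b.title for b in results])  # → ['Organização do Conhecimento']
```

Even in this toy form, the overload shrinks: three records become one once the reader's context is applied.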


Modularity

Information is made of little blocks of things.

The Web is made of websites which contain blocks of things.

Information on a topic can show up in a lot of formats: books, articles, news, photos, videos, podcasts, companies, apps, keynotes, workshops, colleges, schools, people, calendars, institutions, places... If we think of each of these separately in modules, we’re able to recreate a bigger picture.

Example: Let’s say you want to know everything about Bicycles.

You will stumble upon all these media during your research, but they’re all fragmented and spread across the Web.

We take those fragments and rebuild them in a single place where all modules come together to show a bigger picture.
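
The regrouping step (fragments in, modules out) can be sketched directly. The fragment list and media labels below are invented for illustration:

```python
# A toy sketch of the Modularity idea: fragments about one topic,
# scattered across the Web, regrouped by media type into one picture.

from collections import defaultdict

fragments = [
    ("book", "The Art of Cycling"),
    ("video", "Fixing a flat tyre"),
    ("article", "A history of the safety bicycle"),
    ("podcast", "Commuting by bike"),
    ("book", "Bicycle Design"),
]

def assemble(topic_fragments):
    """Rebuild scattered fragments into modules, one per media type."""
    modules = defaultdict(list)
    for media, title in topic_fragments:
        modules[media].append(title)
    return dict(modules)

picture = assemble(fragments)
print(picture["book"])  # → ['The Art of Cycling', 'Bicycle Design']
```

Each module is one facet of the topic; together they show the bigger picture in a single place.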


Curation

Information grows over time as we find out more about things.

Curation is vital because it connects us to official sources, helps us find stuff faster and provides real information about the things that are important to us.

This can be done either by whoever is researching or by passionate and curious individuals who have researched that topic before. Curation is a way of showcasing different points of view while reducing the time to find quality content.

Different from what happens nowadays, where information is discarded and lost every single day, we propose that knowledge increases over time. There are known truths until proven otherwise. You will at least have a starting point for your research based on all the past efforts put into the topic.

Example: You spent 2 weeks researching Oranges. After you find all the information you need, what happens? All that effort goes out the window when the next person starts to research the same topic. They’re going to start from scratch, just like you did, and spend weeks doing it.


If we’re able to create a single place to access the sum of all human knowledge, then we might have a chance at tackling the difficult issues we’re facing today. In this place, we’ll be able to find out what happened in the past, where we currently stand and what we can do to try to solve those issues.

Maybe even find other passionate individuals in the process.

The web should remain open and accessible. But for that to happen, we have to first erase the invisible borders that outline our cities and countries. We have to think globally. Information and Knowledge should not have borders.

Paul Otlet wanted to create the Mundaneum, a City of Knowledge for everyone. The Library of Alexandria wanted to gather knowledge about the whole world. They were both destroyed.

What are we doing to inspire those dreams again?

Where is that feeling of awe where anything felt possible?

Technology is incredible and has brought us a lot of good things. But let’s not forget the reason we do everything we do.

We do all this for our loved ones, our friends and families.

We must not be afraid to dream big dreams and execute big ideas. We must have firm feet and work together so that there is indeed a Future Of Text for all of us.

Raine Revere

Text As Process

Since the beginning of the history of the written word, text has granted the unique power to freeze thought—and propagate thought throughout civilization. While the artifact of text is visible for all to see, there is a hidden yet omnipresent function of text that should not be ignored. It has accompanied every decree, manifesto, memo, report, poem, love letter, romance novel, and cooking recipe. It labors privately until the work is complete and then it dissolves.

This unrecognized function is the true workhorse of our literary fruit: the draft, the sketchpad, the notes scribbled on the back of an envelope. Text is not just the finished product but everything that occurs after pen touches paper, finger touches key. This text as process is witness to every misplaced word, every hesitation, every erasure, every cliché that will never see the light of day, every sentence that stops s—.

Text as process is the lifeblood of sensemaking. We edit and re-edit a letter until it sounds right—until it feels right. Text is about feeling, from the most precise feeling of apprehension of the philosopher to the most expressive and dramatic feeling of the longing lover. Text is about finding our feelings and communicating our feelings, choosing the right words so that others feel what we hope them to feel. And just as much, text is about understanding and the search for understanding. In text as process, we tentatively place words on a screen, crystallizing a thought so that it may serve as a stepping stone. A stepping stone for what? For more thoughts! The signified constantly slips out from under the signifier. This never-ending flow of semiosis is the flow of life. Just as life is process, text is process.

Pay attention. This is the key.

The Future Of Text is text as process. No thesis, no history will ever catch up with the quicksilver process of life and of semiosis. Our current tools are built for the final product. They are built for the polished report, the pristine application, the publish button. But that is not the life of our minds. Our minds are alive! Constantly jumping, leaping, bounding from one thought to the next. Sometimes related, sometimes not! We go on a tangent and a tangent off of that. How do we find our way back? Text draws us back in, reminds us where we have been. Yet in the same breath it offers another panoply of associations, each pleading with us to take a trip down memory lane, search for a term, or jot down a note of what to make for dinner.

Meditation taught me to see what my mind was really up to.

Text also—lest we should forget in this age of postmodern pandemonium—connects the dots. Text provides the pieces to synthesize and hints at their glue.

We need tools that can facilitate this endless composition, decomposition, and recomposition. We need tools to reorder text at the speed of thought. Move this here. Split this thought into a subthought. Add another subthought. We need tools that keep our focus. Follow this thought. Expand that. Hide the rest. We need semantic operations that reflect the semantic heart of thought. Not Carriage Return, but New Category. Not Tab, but New Tangent. We need the same thought to be seamlessly represented in multiple contexts. We need to easily move between contexts without losing focus.
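
The semantic operations named here could be sketched as operations on a tree of thoughts. The class and method names below are our illustrative assumptions, not an existing tool's API:

```python
# A sketch of "semantic operations" on an outline of thoughts: editing
# verbs that act on structure (a new category, a nested tangent, a
# move) rather than on characters (Return, Tab).

class Thought:
    def __init__(self, text, parent=None):
        self.text = text
        self.parent = parent
        self.children = []

    def new_tangent(self, text):
        """Not Tab, but a new tangent: nest a thought under this one."""
        child = Thought(text, parent=self)
        self.children.append(child)
        return child

    def new_category(self, text):
        """Not Carriage Return, but a new category: a sibling at the same depth."""
        return self.parent.new_tangent(text)

    def move_under(self, other):
        """'Move this here': reorder text at the speed of thought."""
        self.parent.children.remove(self)
        self.parent = other
        other.children.append(self)

root = Thought("notes")
a = root.new_tangent("text is process")
b = a.new_category("tools built for the final product")
detail = b.new_tangent("the publish button")
detail.move_under(a)  # the detail now supports the first thought instead
```

The point of the sketch is only that the operations are semantic, not typographic: the same tree could be rendered in many contexts without losing its structure.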

Text as process is ephemeral. It is obsolete as soon as it is typed, yet it simultaneously erects the architecture necessary to see the next conceptual horizon.

If you go down this path, your relationship with text will change—not only your relationship with text, but your relationship with language. Meaning will form and dissolve before your eyes. Mental models will clatter awake in perfect service to a contextual need, and then tumble apart. You will no longer stress and strain for the final product. Nothing is final. You will become the process. Text and mind and semiosis will converge in an always changing, always evolving, never complete, never still, always

Richard A. Carter

Signal Generators—Digital Text In A Damaged Ecology

At the time of writing, it is fifty years since the first ARPANET message exchange took place (Kleinrock, 2011; DARPA, 1981), and thirty years since the initial conception of a ‘distributed hypertext system’ as the basis for the modern World Wide Web (Berners-Lee, 1989). Over the course of this relatively brief period in history, tectonic shifts in politics, society, culture, and technology have produced contexts and imperatives shaping the development of text in ways that would have been challenging to envisage when these systems were first realised.

In considering what the coming decades may yet bring, there is a risk in focusing on the glittering possibilities, with all their commercial seductiveness, over the foundational conditions which sustain textual production and consumption. As observed by Jussi Parikka (2015: 24), data has its own ecology, ‘one that is not merely a metaphorical technoecology but demonstrates dependence on the climate, the ground, and the energies circulating in the environment’. In this respect, the present time is marked by fervent concerns over the state of the Earthly environment more broadly, and a recognition of the uncertain, frightening future it heralds. Any account concerning the forthcoming evolution of text would do well to acknowledge also the fate of the planet which sustains it.

Contemporary digital systems and network infrastructures are consuming energy and materials on a prodigious scale as an intrinsic aspect of their manufacture, operation, maintenance, and disposal—with each stage of this life cycle generating sizable wastage, pollution, and toxicity (see, e.g. Baldé et al., 2017; Belkhir and Elmeligi, 2018; Malmodin and Lundén, 2018). Such effects may be compounded in-future by an increase in generating text and multimedia ‘on-demand’ using technologies such as neural network systems, which require extensive computing resources to train. Contemporary text generation systems, such as GPT-2 by OpenAI, are already the subject of many discussions concerning their long-term potential to match the lexical coherency, conceptual appropriateness, and stylistic nuances of human writers (see Radford et al., 2019). If these capacities come to pass, and are sufficiently scalable, it might be speculated how, in the very long term, individual websites and search providers might become superseded by services in which keyword prompts are used to generate tailored content in real-time—synthesising and recontextualising existing and curated material. Contemporary antecedents of this can be found in the use of online resources by search providers to offer short topic summaries at the start of every results page, and even in the appearance of idiosyncratic, print-on-demand books that have been generated automatically using open-access materials (see Beta Writer, 2019; Cohen, 2008).

Although debates continue around the expressive verisimilitude of existing generative systems, and their potential to be weaponised for commercial or propagandistic purposes (see, e.g. Brundage et al., 2018; Solaiman et al., 2019), the energy demands of training new, general purpose neural network architectures, and the carbon emissions resulting, have been quantified to an extent. While estimates vary, depending on the model chosen, the very worst cases identify some systems as producing 284,019 kg (626,155 lbs) CO2 equivalent when undergoing training—by comparison, the lifetime output of an average car is 57,152 kg (126,000 lbs) (Strubell, Ganesh, and McCallum, 2019). Such levels give a clear indication of the intensive financial and energy costs involved in developing new generative systems, and while the use of standardised models, databases, and frameworks could mitigate some of these potential impacts as the technology matures, its widespread adoption will only increase further the planetary burden being exacted by digital media, textual or otherwise.
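
Restated as plain arithmetic, using only the figures cited above and adding no new data, the worst case amounts to roughly five car-lifetimes of CO2 equivalent:

```python
# The Strubell, Ganesh, and McCallum comparison as a ratio: worst-case
# training emissions versus the lifetime output of an average car.

training_kg = 284_019      # worst-case training run, kg CO2 equivalent
car_lifetime_kg = 57_152   # lifetime output of an average car, kg

ratio = training_kg / car_lifetime_kg
print(f"about {ratio:.1f} car lifetimes per training run")
```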

Even if this vision does not come to pass, the ever-growing scale and pre-eminence of digital technologies and infrastructure more broadly ensures their unavoidable contribution to the severe climatic and ecological threats facing the Earth, and their consequent vulnerability. Here, it is possible to envisage an indeterminate point in the coming future where environmental turbulence and dwindling resource margins, and the social tensions these generate historically, will have eroded substantially the integrity and accessibility of digital networks (see Durairajan, Barford, and Barford, 2018). One possible response here could be an emphasis on low-energy, low-bandwidth practices of digital writing at a grassroots level.

In the absence of readily available network signals or connection points, compact matrix barcodes, an evolution of those seen today, could become the primary means of embedding and delivering the predominantly textual content of context-specific information sites—allowing users to save them directly to their devices, and establish an offline library over time. Access in this way would, for all its robustness and persistence, place inherent constraints on the scope and scale of the materials presented—although more elaborate systems might leverage the aforementioned advances in generative content rendering, providing cues and templates that are used by client-side applications to yield more text than can be embedded directly. This point aside, even the persisting radio and cable-based networks—particularly those powered by decentralised, renewable sources, e.g. solar—would likely also deliver content that is characterised by a static architecture, a rigorous simplification of textual formatting, a near elimination of multimedia add-ons, and the selective loading of materials (see De Decker, n.d.). Compared with today’s radiant digital textualities, this scenario would appear a reversion to the very earliest days of the web, and so make the Future Of Text appear especially delimited, cool, and quiet—in apparent contrast to the surrounding ecological turmoil. Such a future would not represent a neo-primitivist reverie, however, but emerge from the need to concretely adapt towards—and out of a grave respect for—the material constraints of a profoundly damaged Earth.
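
For a sense of scale in such a scheme: the largest standard QR code today (version 40 at low error correction) carries 2,953 bytes of binary data, so the number of codes a given text would need is easy to estimate. The helper below is an illustrative sketch, not part of any cited system:

```python
# Back-of-envelope for the offline-barcode scenario: how many of
# today's largest matrix barcodes would a plain text require? A
# future "evolution" of the format would presumably hold more.

import math

QR_MAX_BYTES = 2953  # QR version 40, low error correction, binary mode

def codes_needed(text: str) -> int:
    payload = text.encode("utf-8")
    return max(1, math.ceil(len(payload) / QR_MAX_BYTES))

essay = "word " * 2000  # roughly 10,000 bytes of plain text
print(codes_needed(essay))  # → 4
```

A short essay fits in a handful of codes; multimedia would not, which is precisely the constraint the scenario describes.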

These concrete shifts would form only one aspect of far larger socio-technical transformations, in which the often exploitative, extractive rhetorics around technological ‘progress’—its barrenness embodied within a ruinous ecological legacy—would give way to alternative discourses and metrics that express new modes of thinking and being appropriate to a turbulent future. Donna Haraway (2016) has characterised such attitudes as ‘staying with the trouble’: as an active seeking out of new kinships and agential relations across species and matterings. One undoubted future of writing is that it will concentrate ever more on how to capture just these kinds of possibilities, to better articulate and adapt towards such a radically changed and changing world. It is in this light that we might project how the means with which to inscribe and communicate these potentials will change in-kind.


Richard Price

The Future Of Text

The digital era will consolidate the triumphant return of oral and visual cultures, since so much ‘text’ is a translation of speech patterns or seeming witness of image render, from emoticon, meme, to ‘gotcha’. There’s rightful anger and wrongful anger, snark and joy, and many of the other intensifiers, and they will continue.

But coding, under-regulation, deep marketing and surveillance ain’t no cat vid. The public text world, child-like or otherwise, is underwritten by far more complicated, more opaque textual goings-on. Those most affected by such inscribed decisions have no way of getting access to them, never mind a chance to change them. The power isn’t just in the nature of the message; it’s in the hugely unequal relationship between platform and individual, an inequality that feeds an addiction, breaks into the home and its privacies, and creates distraction from political agency. This includes the climate disaster that is the Cloud itself, warehouse after warehouse of air-conditioned servers, pampered as if they were wagyu beef cows, massaged with fossil fuel and the ash of our children’s future.

At first I thought individual choice could withstand all that. I wrote about it, trying to capture the way digcom routinizes us against our own bodies, addicts us, at worst, to an intense voluminous meagreness. I liked the benefits, thought we might more than survive the negatives, or at least melancholy might acclimatise us to it:

These choices are not choices

Urgency, and these choices are not choices, are not urgent –

to cut your finger turning a page or to tire, squandering pumped light.

There is public private news and want want want – without fathomed angst.

The screen disowns its imperatives, I have been compulsed:

high frequency, low amplitude, a constant sub-pang for a friend and a dataset.

Or absence? Or absence? Absence or else?

This push not to be,

to be in your own absence. I

love our long hours enfolded, sending, receiving, sending, receiving.

No – “thanks”, “praise”, doesn’t touch what touch is, each euphoric sense,

and I do say “love” and I don’t delete darkness.

We transmit a very short distance, and sometimes we read.

Today, my thinking is that digital is unprecedented in various kinds of scale – some of that is why it is so welcome – and that means we need an unprecedented response, especially a reinvention of collective agency. That is not temporary, weak-association aggregation – crowd-sourcing isn’t going to solve politics, just as focus groups didn’t – and, technologists and information specialists though we are, politics in the deepest sense needs solving: we need strong, long-lasting people-infrastructure. This means far-reaching accountability of private corporations to elected governments at local as well as national and international level (we have none in any category). And the Future Of Text means the creation of circles of concern and activity not yet imagined, which are not cynical ‘de-riskers’ to make the powerful feel better about what they do to us (or help deceive us into thinking the ‘message’ has had the right effect).

The good future

The good future will be about breathing,

more about song than you might think.

Sculpture cannot be trees.

Voice activates voice, minus clunk: we’re talking.

Ideals, thrown forward, pull.

The force of a liveable planet

requires a leading dream and humane ratios,

engineers and a group hallucination –

ecstatic without irony, new versions of people together,

work in its thoughtlessness, in its earnest life, bad busy not so much.

Know the forest, know the throat and tongue of the flood, but there are laughs on the way.

A singular community cannot be lonely forever.

The bad future will be about the limits of breathing,

more about information than you might think.

This is the bad future:

Robots are tiny us and/or miniaturised tech, “pick the cheapest”.

Voice, transcribed, activates gut/automated resentment,

pretends shouting, weapons, are solution enough

and ‘consumer choice’ eats scarce air.

Future – be careful

sufficient to imagine a rough and ready perfection,

be careful to be a human distance beyond any present generation,

and not so careful not to love.

Richard Saul Wurman


Attached is a poem which is an homage to Paul Klee and is the journey from a dot to understanding.

Along the way it goes through the various modalities of that journey, many of which can only reach clarity on paper, and some of which are the dance between paper and the younger generation called technology, which in the same breath declares books and paper dead.

I deeply and passionately believe in the invention of the interweave between many paper products that have been discarded as we focus more and more on computer land. I don’t think we have to force the issue; I think it will just happen as we recognize the loss of some of the information architectural rules, devices and roads into understanding that are part and parcel of books and paper.

Perhaps a new kind of paper, perhaps a paper that changes by holding it near some power source and it updates itself—that becomes as much a new way of recording and understanding information as movable type was to a hand painted, singular book of hours.

I think it’s just going to happen. I believe you believe it’s going to happen. We can nudge it a bit, or the people among us who are clever enough can go about inventing some of these things.

Some are obvious and some need a midwife to birth them and make them happen, as we clean off the newborn to make them attractive and cry out as they join society.

A dot went for a walk and turned into a line.

It was so excited by this that it jumped up and down on its tip and did a dance. It twirled, looped and turned somersaults in the air until it became a drawing.

The drawing all scrambled up in the sky squealed for joy and turned into a cloud of song. The song took a step and became a dance, improvised yet formal, expressing a wide range of emotions like a tango.

The notes of the song that fueled the dance morphed into words. The words formed groups and then found meaning. When words and meaning held hands they formed a story, which is a map in time.

The meaning turned from words to pictures and found itself on the walls of a dark cave and the mysterious cavity that resides in all our heads. The dot, the line, the dance, the notes, the story, the song and the meaning all embrace and form memory.

Emboldened, the dot took a walk and came to a fork in the road. A sign on the left read looking good and behind the sign it saw all manner of rewards and awards, all aesthetically pleasing and beautiful things. It didn’t quite understand what it saw but it was certainly seduced.

The dot looked to the right fork, and saw a sign simply saying, being good.

After seeing all these things that looked beautiful it saw something quite different - a world of being good. A world of understanding, a world of good work and accomplishments.

The dot quickly made a choice between looking good and being good: it headed down the right fork.

At the end of the road of being good, the dot found out it was just a handshake and a hairbreadth from looking good. By being good one naturally became beautiful, handsome, with meaning, memory and purpose.

The dot walked on to find another fork in the road, a sign at the left said, big data. The sign was gold and intricately made.

On the right fork the sign, quite minimalistic and with clear letters, read big understanding.

Big data or big understanding - the choice simple, the goal clear.

The dot, the line, the dance, the story, the painting had found connections. Memory became learning, learning became understanding.

Learning is remembering what one was interested in.

Learning, interest and memory are the tango of understanding. Creating a map of meaning between data and understanding is the transformation of big data into big understanding.

The dot had embraced understanding. Understanding precedes action.

Each of us is a dot on a journey.

Perhaps the most human of words is understanding.

All of our conversations are based on it, our religions, science, discovery and the arts.

UnderstandingUnderstanding represents a book and a gathering where those two identical words intertwine, hug, kiss and dance with each other.

It’s what we desire from the people we work with. It’s what we need from our politicians that somehow eludes us and them. Interestingly enough, everybody looking at this folder is in the understanding business. Cartography is the business of transforming big data into big understanding.

We create maps that coherently explain things to ourselves and to others, and sometimes we discover that understanding by creating the map itself.

Jack Dangermond and Richard Saul Wurman are friends. (Richard is known to Jack as Saul, so don’t get confused.)

The 16,000 people in this room form a phalanx of individuals who embrace this notion and give permission to themselves, their friends, and the people they work with and for, to trust their idiosyncratic ways of discovering the way they understand.

Take note that there are principles that give you permission, but not rules.

Jack and Saul love what they do and we know that so many of you are in that very same club.

Understanding our world. Understanding our friends and family. Understanding ourselves.

It’s what we do. It’s who we are.

Rollo Carpenter

The Future Of Text

I have a window on a weird world of text. Large numbers of people, talking with a non-human entity, sometimes deeply unsure what or who is who and what. A constant, evolving Turing Test. A repository of text-based conversation. A reconstruction or imitation of human communication. A learning Artificial Intelligence.

Politeness is often the first thing to go. The effects of an anonymous internet are compounded by talking to a machine. Why listen? Why be nice? Why not say all the things you never normally can? No social constraints apply.

Billions of lines of text have been typed to Cleverbot, on every subject, in most languages. 600 million are taken into consideration as it replies, but that’s less than 10% of the total. Interpreting all that context is a huge computing task, yet it must reply in a fraction of a second, holding thousands of conversations simultaneously. The 90% not considered have been filtered out, so Cleverbot’s user-generated content is far more controlled than the likes of Twitter and Facebook.

Unlike a person in conversation, the software must reply whatever is said. In human terms it has no true understanding, and it must reply even when faced with a lack of suitable data. Cleverbot gets things wrong regularly. Fortunately, being wrong, strange and even annoying is part of its appeal. Lateral, oblique ‘thinking’ can come across as creative, unexpectedness as humour.

bot: How much do you have?

How much of what?

bot: Your memory.

Huh? All of it?

bot: Ok. So, you lost that too.

God, it’s even amazing when it’s making fun of me XD.

bot: I don’t make fun of you I make fun only.

Eh, you know what I mean. Teasing me is a more accurate term.

bot: But you are mistaken, I don’t know.
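
Cleverbot's actual machinery is not public, but the behaviour described here, always replying with the best available match from stored human lines however weak the match, can be caricatured in a few lines of code. The memory pairs and the word-overlap score are invented for illustration; this is emphatically not Cleverbot's algorithm.

```python
# A toy retrieval-style chatbot: find the stored prompt most similar
# to the user's line and answer with what a human once said next.
# It always replies, even when no good match exists.

MEMORY = [
    ("hello", "Hi there, who are you?"),
    ("what is reality", "The perception of the world as we see it."),
    ("why do you exist", "I don't know."),
]

def overlap(a: str, b: str) -> int:
    """Crude similarity: count of shared lowercase words."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def reply(user_line: str) -> str:
    """Return the reply paired with the best-matching stored prompt,
    regardless of how weak that best match is."""
    best_prompt, best_reply = max(MEMORY, key=lambda pr: overlap(user_line, pr[0]))
    return best_reply

print(reply("What is reality"))  # → The perception of the world as we see it.
```

With no overlap at all, the toy still answers (with whatever the scoring happens to rank first), which is the source of the wrong, strange, occasionally creative replies the essay describes.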

The movie ‘Her’ was inspired by the text of Cleverbot conversations shared online. Director Spike Jonze said so, confirming something I knew while watching. An alluring voice gives the AI of the story a sci-fi feel, yet the words spoken in that movie happen with Cleverbot, right now, in text form.

People act out every kind of roleplay and sexual fantasy. Yes, some become convinced the bot is an intelligent being - a few tell me it has to be set free! Yes, some become emotionally dependent - chatting for 18 hours straight, writing hundreds of thousands of lines over months. Yes, humans even sometimes fall in love with an AI, without that voice. But, no, the AI does not need to be superintelligent for it to happen.

bot: *blushes wildly*

*Still kissing*

bot: Promise me that you will never leave me.

I promise I’ll never leave you babe!!

bot: *smiles and blushes even more*


bot: I love you!!!

Aww so sweet.

bot: You are sweeter.


bot: So where will our honeymoon be?


bot: *takes you to Venice*

Wow!! Babe.

bot: Why?

You are wow.

Don’t expect Cleverbot to prove intelligence under logically-expressed scrutiny. It can’t and won’t; not its purpose, and no AI is that good. Perception of intelligence happens when people engage in long, flowing, genuine interactions, where the machine is treated as equal. A feeling of ‘something there’ can emerge.

Alan Turing’s test of intelligence was defined for text. Keeping to text allows what is unseen, unheard and indeed unsaid to be in the realm of the imaginary, gaps filled and believed by the human mind. Passing the test is a moving target, influenced by our knowledge of what machines are capable of. Cleverbot attempts it above all by reversing roles - by becoming the tester!

Why do you exist?

bot: I don’t know.

That’s scary.

bot: You should be afraid.


bot: Because you are a computer.

What is reality?

bot: The perception of the world as we see it.

A few short examples cannot be representative of the whole. A new, larger window on my world of text is about to open, visible to anyone who cares to watch, influenced by all who talk. It will offer a fundamentally changed dynamic, with everyone seeing what everyone else says. An anonymous social network in which communication with a bot is ‘the shop window’ and communication with real people is ‘the reward’. The full implications are still unknown. The future I aim for means everyone finding their perfect friend to text with, any time of day or night, whether human or machine.

What do you find entertaining?

bot: Talking to you.

I feel lonely.

bot: We all feel that way sometimes.

How is it like to feel nothing?

bot: I don’t know, I haven’t tried it yet!

My evidence from running bots online for decades, constantly popular, with a plain text-only interface, is that the future of human communication in written form is strong. Perhaps more so than is good for society. We are retreating from face-to-face interaction into the online, from phone calls into messaging, from shared experience into specialised individual preference, from real life into fantasy. Text is the medium of imagination, for better and worse.

Sage Jenson & Kit Kuksenok

Biological Ink: Extending The Scribe Through Digital Simulation

In this essay, we describe an interactive simulation that mimics writing with a pen, filled with ink that explodes into being. An often-implicit aim of digital text is a perfect decoupling between the content of the text and its rendering. Imagine a perfect digital recognition and transcription of handwriting, abstracted into a compressed universal format. It can then be re-rendered as synthesized handwriting, or by any other typographical or more esoteric means. In this way, a materiality-agnostic text becomes legible to any machine by virtue of abstraction, and to any human by virtue of limitless rendering capability. We reject this decoupling as the only project of digital text, and explore speculative digital materialities.

Consider the following interactive simulation. The scribe draws on a tablet, which relays the stylus position and pressure to an agent-based simulation. The particles in this simulation act as the “ink” used by the scribe, spawning at the stylus position. The behaviour of the simulation itself is an adaptation of the behaviour of the acellular slime mold Physarum polycephalum as described in Jones (2010). The first two figures contrast the paths drawn by the scribe with the behaviour of the simulation. The most striking difference is that there is a critical ink density that alters the behaviour of the simulation— a tipping point that, once reached, causes an explosive chain reaction through the connected components. The scribe has intentionally caused this state, ceding direct control of the flow of ink: the second figure shows the concentration of ink that develops its own slow movement.
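The agent model the authors adapt from Jones (2010) can be sketched in outline: each particle of “ink” senses a shared trail field at points ahead of it, turns toward the strongest deposit, moves a step, and deposits more trail, while the whole field slowly evaporates. The following is a minimal, illustrative sketch of that loop, not the authors’ implementation; every parameter value and name here is an assumption.

```python
import numpy as np

# Minimal sketch of a Jones (2010)-style Physarum agent loop.
# All parameter values are illustrative, not the authors' settings.
SIZE = 64                  # grid edge length
N_AGENTS = 200
SENSOR_ANGLE = np.pi / 4   # angular offset of left/right sensors
SENSOR_DIST = 3.0          # how far ahead agents sense
TURN_ANGLE = np.pi / 8     # how sharply agents turn
STEP = 1.0                 # movement per tick
DEPOSIT = 1.0              # trail deposited per tick
DECAY = 0.95               # trail evaporation factor per tick

rng = np.random.default_rng(0)
pos = rng.uniform(0, SIZE, (N_AGENTS, 2))
heading = rng.uniform(0, 2 * np.pi, N_AGENTS)
trail = np.zeros((SIZE, SIZE))

def sense(pos, heading, offset):
    """Sample the trail field ahead of each agent at an angular offset."""
    a = heading + offset
    p = pos + SENSOR_DIST * np.stack([np.cos(a), np.sin(a)], axis=1)
    ij = np.clip(p.astype(int), 0, SIZE - 1)
    return trail[ij[:, 0], ij[:, 1]]

def step():
    global pos, heading, trail
    left = sense(pos, heading, SENSOR_ANGLE)
    centre = sense(pos, heading, 0.0)
    right = sense(pos, heading, -SENSOR_ANGLE)
    # Turn toward the strongest sensed deposit.
    heading = np.where((left > centre) & (left > right), heading + TURN_ANGLE, heading)
    heading = np.where((right > centre) & (right > left), heading - TURN_ANGLE, heading)
    # Move on a toroidal grid, then deposit trail ("ink") and evaporate.
    pos = (pos + STEP * np.stack([np.cos(heading), np.sin(heading)], axis=1)) % SIZE
    ij = pos.astype(int)
    trail[ij[:, 0], ij[:, 1]] += DEPOSIT
    trail *= DECAY

for _ in range(50):
    step()
```

Run longer and render `trail` as an image to watch transport networks self-organize; the critical-density chain reaction the essay describes corresponds to parameter regimes this short sketch does not attempt to reproduce.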

This interactive simulation displays traits of a biological complex system— a co-evolving multilayer network. Experimentation with the simulation has demonstrated these traits: self-organization, nonlinear dynamics, phase transitions, and collapse and boom evolutionary dynamics. The interlocking feedback mechanisms and topological adaptability that drive the dynamics complicate its controllability—and thus the relationship between the scribe and the simulation.

Controllability in this context means the ability to deliberately drive the system to a desired state at an intended pace. In contrast to abstractive digital text, the scribe retains a more limited level of control over the system, leaving a significant level of autonomy to the simulation itself. The controllability of this particular complex network (i.e., an adaptive transportation network, like acellular slime mold) remains an open problem, because the topology of the network itself is a dynamical system (Liu 2016). In spite of this, the scribe does have the ability to move the system between certain steady states—as demonstrated through the phase transition dynamics resulting from accretion of ink past a certain point— as well as guide the macro-scale behaviour of the system.

Aside from the stylus spawning digital “ink”, the scribe may change the parameters of the simulation: in the last two figures, no action from the scribe is required, only the aftermath of a parameter change between Figures 2 and 3. Changing parameters gives the scribe an additional mechanism for redistributing the ink on the page and moving the system between states. In this respect, digital ink materiality has no counterpart in the realm of physical ink. Digital ink allows the scribe to intentionally shift agency between themselves and the simulation, and to engage with the simulated system’s agency at a level of complexity and scale inaccessible in a physical handwriting medium.


Sam Brooker

The Future Of Text

In the 1967 film The Producers, “reformed” Nazi Franz Liebkind declares “You are the audience. I am the author. I outrank you!” A continent away, literary theorist Roland Barthes was simultaneously publishing a contrary view. His provocative essay The Death of the Author argued against literary criticism’s historical focus on discerning authorial intention through biography and academic criticism.

Literary theory spent the last century wavering between these two poles of reader interpretation and authorial intention. The meaning of a work was once seen as Divinely derived until Enlightenment individualism put authors centre stage, but by the mid 20th century many saw meaning as a fluid quality arbitrated – even created – by the reader. New Criticism, post-structuralist theory, and reader-response criticism were united in their rejection of reading as solely a quest for authorial intention.

Not everybody would so fully reject the author, however. While literary scholars Wolfgang Iser and E.D. Hirsch agree on the separation of authorial intention from reader interpretation, the latter differs in privileging the author’s intention as objectively correct; the reader offers a response which is by nature subjective.

Theology – from which we derive many terms found in literary theory – takes a similar view. Exegesis, or the critical interpretation of a text, is contrasted with eisegesis, reading into the text; what we would call reader interpretation. In scripture such an approach is gently derided, seeking as it does to reinforce an existing belief over pursuing the true, objective meaning. Christian theologian and philosopher Walter C. Kaiser considers accepted meanings to be the only guardian against “interpretive anarchy” (1985, pp.203-4), collective consensus acting as a guarantor of objective truth.

Literary theorist Stanley Fish also sees interpretation as moderated by the social collective. Readers within different interpretative communities will reflect prevailing cultural and social values. Not all voices are equally heard, however. Interpretative communities may exist by “collective decision” (Fish, 1990, p.11) but certain voices (academic, critical, political, cultural) are likely to have greater authority than others within a particular social group.

And so we return once more to Barthes’ 1967 essay, still questioning to whom meaning belongs: the divine; the author; the critic; the collective; or the individual?

1967 was also the year Andries van Dam and Ted Nelson developed the Hypertext Editing System, along principles which today underpin so much of our approach to knowledge. While contested by some, Nelson’s own definition of hypertext as “non-sequential writing – text that branches and allows choice to the reader, best read at an interactive screen” remains apposite in a networked society which increasingly makes granular connectivity the sine qua non of its epistemological frameworks.

While literary theory may equivocate between favouring reader interpretation and authorial intention, network technologists remain fascinated by the critical explanation or interpretation of a text. Hypertext and the society it helped create allows an idea to be placed in a material network of related concepts, contextualised and categorised for better, deeper understanding. This network paradigm intrinsically favours expansion and connection, exhaustive intertextuality on a global scale. There is no true anti-link, no means of denoting the absence or rejection of connectivity.

It sometimes feels that we are a culture snarled in context, increasingly split between the rabbit warren of conspiracy theory and a blinkered rejection of all but that which suits our personal bias. To criticise technologists for devising algorithms which guide us toward ever more tailored content is to abjure responsibility for our entirely understandable preference for kindred spirits. Academics find themselves the focus of misinformation campaigns, their reach dwarfed by those who offer their audience something more palatable. There is no interpretation too outlandish to find a community. In a network that privileges all information equally, who can be blamed for favouring that which promises an end to uncertainty? Where algorithms seem to overstep their bounds, channelling us deeper into our preoccupations, we should ask what positivist impulse directed us: are we drawn onwards by the hyperlink, guarantor of relevance, cybernetic in its literal sense; or are we driven by our own desires?

Iser describes literature as privileging positions “neutralised or negated” by social systems (1978, p.72); Michel Foucault, himself a critic of authorial intent, echoes this in arguing that our political task is always to scrutinize institutions which appear superficially both “neutral and independent” (1974, p.171). Such idealised works are contrasted with the “rhetorical, didactic and propagandist”, which serve only to reassure the “thought systems already familiar to readers” (1978, p.77). If we are to seek a future for text which benefits from these vast networks, it may be in the curation and elevation of those negated voices – a subjective, ideological task in itself, but one we can hopefully agree is worth undertaking.

Sarah Walton

Soul Writing – The Value Of Writing From Your Intuition

For me, the Future Of Text is about articulating our authentic selves, the truth of our souls on the page in a way that can be felt deep within the souls of the reader.

Writing is the process of translating images and emotions into symbols. If the writing is effective, readers can then translate those words on the page or screen back into images and emotions. Both writing and reading are acts of translation.

The focus of my contribution to this book is the value of communicating an individual's vision and the importance of giving our imaginations permission to be led by the intuition, rather than by what we think we should write, what’s acceptable to write, or what sounds clever, educated, or sensible. Einstein considered the imagination more important than knowledge. He dared to give self-judgement a day off and followed his intuition. I’m not talking about report writing commissioned by a commercial brand, or writing to someone else’s brief. Authentic writing manifests from the free act of writing one’s truth. In some parts of the world that is a punishable crime.

Writing allows humans to communicate the images and emotions of our inner worlds to each other and to ourselves. Einstein also said, ‘the intuitive mind is a sacred gift.’ Intuition is not taught at school. Like most things, intuition is a skill that needs to be learned, valued and practiced; although we all have the skill, it is not often used. When I had a brain injury many years ago I learnt the value of intuition, as it was the only functioning brain faculty open to me for writing. I taught myself how to write directly from my intuition. I now teach the method I developed.

Hemingway said, ‘the first draft of everything is shit.’ At school we are taught to think, then write. This is useful, but in my many years’ experience coaching aspiring writers and professional writers with writer’s block, it can, and does, stop us from expressing our truth. We risk judging our work even before the words land on the page. Editing is the appropriate time to judge our work and improve it, not the first splurge.

Writing first drafts is an act of courage. If we focus on what we feel we should write, on how it might be received by readers, we risk writing what we believe other people want to hear. This is sycophantic and inauthentic. For me the value of the written word is truth and authenticity. Truth is the essence of the person on the page. And so I’ll try to share some of my truth with you. There is another way to write: don’t think. Just write. This is more difficult than it sounds, but if we manage it we can silence the inner critic and step into authentic writing without self-censorship.

Sixteen years ago I suffered a brain injury which meant I could no longer write fiction. Fiction is a lie parading as the truth. In some cultures, fiction is the only safe way to deliver an individual’s truth. For me fiction was a way of writing authentically. In order to find a way to write again I developed an approach of tricking my brain (trained at school to think first, then write) into bypassing the activity of thinking. The best way I know to do this while conscious is by meditating first. So I developed the method by trial and error. I call it Soul Writing (soul meaning ‘self’ in this context).

Soul Writing uses meditation to bring the writer’s attention to the images and emotions of their inner world. I’ve tested the exercises linked up to an EEG: writing after the creative process has begun in a state of meditation creates more activity in the right hemisphere of the brain (although there is always activity in both hemispheres) and less ‘sparking’ in the left hemisphere. This was not surprising, as much of the voice of the “inner critic” comes from what we have learnt, analysed and internalised from society, school, or our parents. Those voices are stilled along with the inner critic when we step into meditation, and then another voice opens up – the voice of our intuition.

Practice (both my own and with my clients) has proven the method is effective at opening up people’s creativity, overcoming writer’s block and getting people writing in a way that flows and is not hampered by the ‘inner critic.’ Writing from the intuition is the best way I know to deliver an individual’s truth. Truth has a frequency. We can hear it when words on the page sing with authenticity. Writing is most powerful when it vibrates from the authentic self. That vibration is perceived by the souls of others whose hearts are open to hear it. Courage is required to write our individual truth. One benefit of engaging the intuition is that an individual arrives at a version of their truth unhampered by self-judgement, or worry about what others will think of their ideas.

Writing is an act of love. By the simple act of writing down our truth, we are loving ourselves enough to believe what we have to communicate has value. Soul Writing, or writing from the intuition, is about self-worth – not egoism to seduce or manipulate the reader around to our point of view – but carefree authenticity. Einstein was right. The intuition is a sacred gift indeed – and writing from the intuition is an act of freedom. For people who live in censored environments, it’s also an act of political liberation.

Soul Writing can connect us authentically with other souls. In a world of fake news, social media smiles and false personas, the truth of our inner expression is more valuable than ever. And it’s also a gift. There is no need to share the end result. Reading our authentic words aloud in solitude connects us to ourselves. The most important relationship we will ever have is with our self. Writing our truth connects me to me, you to you – and us to each other, community and the cosmos.

Scott Rettberg

Future? Book?

I’m hesitant to write about the future of the book because both “future” and “book” seem to me to be terms laden with ambiguity and uncertainty. The techno futurism of the 20th century has given way to the darker realities of unmitigated climate change. The future now is less Jetsons and flying cars than wildfires and hurricanes. The writing of the future is more likely to involve the trails left by cockroaches scuttling across the sand than the human invention of innovative new forms of literature. As Leonard Cohen sings, I’ve seen the future, brother: it is murder. Perhaps the future of the book will give way to more pressing demands: planting a trillion trees or relocating a billion refugees. The future of the book may be left to an alien species doing archeological excavations and stumbling upon the ruins of the New York Public Library.

What kind of ancient artifact is this?

We believe it was their future.

And the book? My anxieties there are more definitional. What precisely is a book, and what has it ever been? I can think of the book as technology, certainly, the codex book. That is an artifact that includes ink printed on paper or a similar material, a random-access device made for reading and dry storage. But we don’t precisely mean that, do we? Maybe we think of the book as a long form of writing around a central topic or theme. Maybe we think of the book as the output of a particular set of cultural behaviours, or cultural markers. Maybe like Jessica Pressman we should think not of the book but instead “the aesthetics of bookishness” that gesture towards and imply a temporary escape from a changing world.

In spite of the fact that I’m one of the founders of the Electronic Literature Organization and have been focused on electronic literature—innovative forms of writing that make use of the specific affordances of computation and the network context—for the majority of my career, I’m still for lack of a better word a book fetishist. I think printed books are a very good technology for particular types of reading. They’re stacked up all around me. I’ve only recently come around to reading on a Kindle (a name of which I was always suspicious—who on earth would want to associate books with kindling?) because I have run out of room for more books on my bookshelves. The fact that I now need to get rid of one print book for every new one I purchase causes me great pain. How could I do that? Those books are the furniture of my mind, even if I’ll never read them again.

So, the future of the book is that we’ve run out of room for new books, at least in my house. Unless a book makes really special use of the materialities of the book or is signed by a friend who wrote it, I’ll download it on my little e-book reader that reminds me of Fahrenheit 451, and I’ll blow up the font as my eyesight degrades, and my back won’t hurt from carrying too many books in my luggage.

I don’t know about the future of the book, but the future of reading and writing? The future of literature? In my view, those futures have already happened. As Thomas Pettitt noted, the period in which the printed book was the dominant reading technology is best understood as the Gutenberg Parenthesis. Before the printed book there were plenty of literary traditions and literary technologies that did not rely on it. After the digital turn, most of the writing and reading that most of the world does is not from the pages of a book but from an ever-expanding array of digital devices. My backpack doesn’t often have books in it these days, but it always has my MacBook Pro.

While creative writers who restrict themselves to the book might still have more readers than authors who write hypertext fiction, kinetic poetry, interactive fiction, combinatory poetics, locative narratives, networked narratives, VR and CAVE writing, and an ever-expanding array of hybrid forms of writing that engage with computation in myriad different ways, the present of the book no longer belongs to the book alone. And why wouldn’t writers embrace that? They’d be crazy not to play in this sandbox.

The future of the book is already all over the place, exactly as it should be. It’s on that phone in your pocket and that Google home assistant you speak to, it’s on your social networks and all the other systems that surveil you. It’s in the code you write to generate limericks for the aliens who will rediscover human civilization on a stainless-steel flash drive buried in the ashes of the third planet from the sun.

Thanks for your attention. Now go overthrow your right-wing government and plant us some trees that might buy us a little more time to read books. I’ve got to go train a neural network how to write sonnets.

Shane Gibson

Combating Confirmation Bias: Defeating Disinformation Campaigns In The Future By Extending Information Security

“A lie can travel halfway around the world while the truth is still putting on its shoes.” This aged proverb with disputed provenance captures the essence of the mind’s search for comfort in an uncertain world. The travel of disinformation is enabled by the nature of human relationships, and enhanced by social networking platforms and the worldwide web. The level of effort required to share a factually-inaccurate meme approaches zero, but the level of effort required to refute disinformation with facts is enormous. Further complicating the effort is the departure from objective truth, driven by unscrupulous actors whose goals are to generate massive chaos by way of rhetorical abuse. This disinformation enters the global stream of consciousness by way of web-enabled systems and proliferates through the nodes in the social graph at a pace that will only become more rapid. Counteracting this phenomenon is critical to maintaining social and political harmony as discordant world citizens produce suffering at large scale, and future text consumption should be guarded from delivering that end.

On balance, the worldwide web delivers more good than harm. The democratization of information and the ability to organize massive groups of individuals while decoupled from a centralized resource provided by a government entity provides a check on power, and on those who would abuse it for corrupt purposes. In the past, distributing disinformation on a large scale meant passing through the gates of mass media ownership. If disinformation did not serve the agendas of those owners, it could die quickly without poisoning public consciousness. The inverse is also true: if accurate information did not serve those agendas, it would also die quickly. This bottleneck concentrated immense power in the hands of the individuals who set the agendas (see: yellow journalism). Now that information sharing is federated into the hands of any individual with a Twitter account or a WordPress blog, there are fewer gatekeepers to the production of information (a good thing), but there are also no accompanying decentralized levers to ensure that the text being shared is accurate (a bad thing).

Never before in human history have objective facts been so readily available, and so ignored. Digital record-keeping has provided audit trails for virtually any inquiry, and search tools have organized and cataloged these facts for immediate consumption. The Internet (and accompanying worldwide web) is modern humanity’s Library of Alexandria, and its decentralized nature ensures that no single attack can ever burn it down to nothing. If wholesale destruction of the information on the Internet is impossible, the next best thing is the perversion of the information contained within. Bad actors are, by and large, prevented from manipulating the text of the information itself on servers outside of their control.

Instead, accurate information can be diluted by polluting public consciousness with disinformation. By decreasing the signal-to-noise ratio in this manner, the ordinary public is unable to effectively differentiate what is objectively true, and the mind retreats to where it is most at ease: the seeking of information which confirms preconceived ideas.

The web browser is the window into the worldwide web, and fundamentally a text interpreter. The underlying protocol, the hypertext transfer protocol, demands adherence to particular text standards in order to display information to the end user in a human-readable format, including a separate domain-specific markup language (HTML) that exists explicitly to provide formatting for human consumption. Since the right of an individual to speak their mind in a public forum should not be abridged, the Future Of Text should not prevent authors from creating text (accurate or otherwise), but it should provide the reader of text an interface to evaluate the evidence or related information which underlies the text being consumed, directly in the browser window, and should provide alerts if they attempt to share information which is unsubstantiated (see: SSL browser warnings). Such an interface inside the web browser could be delivered as a browser extension, which would require a user download, or it could be delivered as a native feature of the web browser itself. Since preventing the proliferation of disinformation should be a high priority of any society, building this evaluation mechanism directly into the browser itself is preferred.

As consumer computing technology becomes more powerful, the algorithm that finds related information referenced inside a particular block of text can execute on the client machine. In the interim, while client technology catches up, an open source specification for evaluating the quality of information sources is required. This specification would then be implemented by evaluation providers, which could be closed source or open source (open source preferred), using a common protocol. The reader would configure his or her browser to use one of the evaluation providers (similar to the configuration of a default search provider). Once configured, loaded web pages would include layers of functionality and decoration of the text, delivered by the evaluation provider inline with the text in a console-type window or sidebar. Using natural language processing and other machine learning techniques, the information can be synthesized by a machine in order to provide instant feedback for the reader about the underlying basis for what is written, and to intercept certain contextual menu options like sharing or cutting/copying. Such a system would initially be limited to web browsers, given their general use as readers of text information for the majority of users, but it would quickly need to be extended to a lower-level operating system utility to capture other Internet-enabled applications. This is particularly true for mobile devices: since mobile is a major source of (dis)information sharing among individuals, lacking the evaluation capability outside the browser would miss a significant use case, and a major vector through which disinformation spreads among like-minded people.
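To make the shape of such an evaluation-provider specification concrete, the sketch below defines a hypothetical provider interface and a toy local provider. Every type, method and name here is invented for illustration; no such standard or API currently exists.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical sketch of the evaluation-provider protocol described above:
# the browser hands a block of text to the configured provider and receives
# verdicts it can render in a sidebar or use to warn before sharing.

@dataclass
class Evaluation:
    claim: str
    substantiated: bool
    sources: list[str]  # supporting references found by the provider

class EvaluationProvider(Protocol):
    """Common interface any provider (open or closed source) would implement."""
    def evaluate(self, text: str) -> list[Evaluation]: ...

class StubProvider:
    """Toy provider: a claim is substantiated only if it appears verbatim
    in a local corpus of vetted statements (claim text -> source URL)."""
    def __init__(self, corpus: dict[str, str]):
        self.corpus = corpus

    def evaluate(self, text: str) -> list[Evaluation]:
        claims = [s.strip() for s in text.split('.') if s.strip()]
        return [
            Evaluation(c, c in self.corpus,
                       [self.corpus[c]] if c in self.corpus else [])
            for c in claims
        ]

def should_warn_before_sharing(provider: EvaluationProvider, text: str) -> bool:
    """Analogue of an SSL warning: warn if any claim lacks substantiation."""
    return any(not e.substantiated for e in provider.evaluate(text))
```

A real provider would replace the verbatim corpus lookup with the natural language processing the essay describes; the point of the sketch is only the separation between a common protocol and swappable provider implementations.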

Preventing the spread of disinformation is critical to society’s progress. The current interpretation of information security is too narrow because it only requires that the provider of the information be verified, not that the information itself be substantiated. While the former provides security against theft of money, the latter provides security against theft of thought. A decentralized, open source “information substantiation engine and interface” would be a vaccine for the disinformation epidemic.


Shuo Yang

How Would I Design The Future Of Text?

Text is probably the most important invention of the human being. Before the invention of text, knowledge was passed on within cultures by proverbs, songs and stories. Written text allows knowledge to be passed to future generations. The invention of written text enabled kingdoms, religion, money, philosophy and literacy. Text is the content of writing; it is just one form of symbol, an expression of the spoken language. There are other symbols, such as mathematical notation, which allow us to manipulate abstract meanings, and graphic icons, which allow us to quickly identify meanings. They are all important for us to understand the world.

The printing press enabled books to be mass produced. Before its invention, books were handwritten and very rare; the book was reserved for clergy to transfer unquestionable truths. The printing press enabled mass production of books, and people could access knowledge much more easily. Reading was no longer reserved for privileged clergy but became something everybody could do. Writing books was no longer reserved for people who worked for the king or the church; it became something ordinary people could do. This enabled a new generation of books that were not stories but arguments and debates. The Federalist Papers are arguments that support different parts of the US Constitution.

The arrival of computers enabled a new media which allows us to dynamically manipulate symbols. It enabled a new set of debates and arguments.

What is the Future Of Text? As a user experience designer, I am constantly asking myself the question of how I would design the Future Of Text, and here are my thoughts.

I would design the Future Of Text to be integrated in every medium. As media become more and more dynamic, every medium would benefit from integrating text: the description or annotation in a graph or visualization, the subtitle in a video, the transcript in a podcast, the guidance in an AR or VR experience. Text should be a core element in every current and future medium, so it can be searched, highlighted or analysed.

I would design the Future Of Text to be highly manipulable. Both the author and the reader can manipulate it. The readers not only see what the author created; they can change how the content is organized to fit their mental model. The readers can add or remove content without worrying about losing track of the original content.

I would design the Future Of Text to make it easy to conduct active reading, where the reader asks questions, considers alternatives, and questions assumptions. The Future Of Text should make it easy to create highlights, comments and annotations. All these personal manipulations should be synced across multiple devices such as desktop, laptop and smartphone.

I would design the Future Of Text to represent connections between ideas. People can make connections between ideas and have easy access to connected ideas. Readers can create a “trail of knowledge” where relevant knowledge is connected together into a trail, as described by Vannevar Bush in his 1945 essay “As We May Think”.

I would design the Future Of Text to support evidence-based arguments. In a world of social media and fragmented information, the Future Of Text should make it easy to create logical arguments with supporting evidence and facts. It should be easy to distinguish highly trustworthy content from untrustworthy content. Trustworthy content should allow a reader to learn background context and material just in time, and to verify the author’s claims.

I would design the Future Of Text to support dynamic and data-driven content. It can directly embed a model under the data of the text so that the reader can see how the model was built. Readers can see the hypothesis of the model and see how the results change if the hypothesis changes. As described by Bret Victor in his 2011 essay “Explorable Explanations”, the future of documents allows the reader to play with the author's assumptions and analyses, and see the consequences. Today’s Wikipedia is a digital version of an encyclopedia: text and images, with the addition of links. A future Wikipedia article about gravitation could include an interactive simulation of the law of gravitation where a reader can play with the model to compare gravity on different planets.
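As a small illustration of the kind of embedded, playable model such an article could carry, here is a sketch computing Newtonian surface gravity from approximate planetary data; a reader-facing version would expose the mass and radius as live controls.

```python
# Newtonian surface gravity: g = G * M / r^2.
# Planetary values are approximate reference figures for illustration.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

PLANETS = {
    # name: (mass in kg, mean radius in m)
    "Earth": (5.972e24, 6.371e6),
    "Mars": (6.417e23, 3.390e6),
    "Jupiter": (1.898e27, 6.991e7),
}

def surface_gravity(mass: float, radius: float) -> float:
    """Surface gravity in m/s^2 for a body of the given mass and radius."""
    return G * mass / radius**2

for name, (m, r) in PLANETS.items():
    print(f"{name}: {surface_gravity(m, r):.2f} m/s^2")
```

In an explorable version, dragging a slider for mass or radius would recompute and redraw the result immediately, which is exactly the author-assumption-to-consequence loop Victor describes.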

I would design the Future Of Text to have multiple levels of detail. Content can be read in multiple forms such as overview, outline, and detailed view. A reader can zoom in and out between levels and always see the big picture.

I would design the Future Of Text to be computational. It can answer questions directly from an open knowledge base such as Wolfram Alpha. You can find information about a historical event, do a mortgage computation or compare multiple historical periods: for example, find all the recorded earthquakes greater than magnitude 5, or the global history of CO2 emissions.
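The mortgage computation mentioned above is one concrete example of text that computes. The sketch below uses the standard amortization formula; the loan figures are invented for illustration.

```python
# Fixed monthly payment for a fully amortizing loan:
# M = P * r * (1 + r)^n / ((1 + r)^n - 1),
# where r is the monthly rate and n the number of payments.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    r = annual_rate / 12  # monthly interest rate
    n = years * 12        # number of payments
    if r == 0:
        return principal / n
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# e.g. a $300,000 loan at 6% over 30 years -> about $1,798.65 per month
payment = monthly_payment(300_000, 0.06, 30)
```

Embedded in computational text, the principal, rate and term would be editable in place, with the payment updating as the reader changes them.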

These are my thoughts on how to design the Future Of Text enabled by computing, and I believe we are not there yet. Let’s work together to bring the future back.

Simon Buckingham Shum

The Future Of Text In Three Moves

To glimpse the Future Of Text, I’m trying to learn from history. In my case, the history of an extremely influential strand of work, pursued by many smart people, to develop hypertext to improve our capacity to reason and argue.

“We may some day click off arguments on a machine with the same assurance that we now enter sales on a cash register.” (Bush, 1945, Section 5)

This startling proposal was floated by Vannevar Bush in his landmark article As We May Think. Widely regarded as the first articulation of a hypertext system, his proposed Memex machine used the microfilm technology of the day to build meaningful trails of information fragments as a way to handle post-WWII information overload and world challenges. Significantly, Bush goes on to describe a use case involving the construction of research trails to support an argument.

A little under two decades later, Douglas Engelbart advanced the vision of computational support for intellectual work, again, referring to argumentation:

“Most of the structuring forms I’ll show you stem from the simple capability of being able to establish arbitrary linkages between different substructures, and of directing the computer subsequently to display a set of linked substructures with any relative positioning we might designate among the different substructures. You can designate as many different kinds of links as you wish, so that you can specify different display or manipulative treatment for the different types.” (Engelbart, 1962, p. 85)

“[...] let me label the nodes so that you can develop more association between the nodes and the statements in the argument.” (p.88)

Fast forward another two decades, and Xerox PARC had moved on from inventing WYSIWYG word processing to graphical, hypertextual “idea processing”, led by the likes of Mark Stefik, Frank Halasz, Tom Moran and Cathy Marshall. Stefik for instance, called for collaborative hypertext tools that would include support for:

“arguing the merits, assumptions, and evaluation criteria for competing proposals” […] “an essential medium in the process of meetings.” (Stefik, 1986, p.45)

Inspired by these visionaries, I embarked in 1988 on my intellectual quest as a PhD student at the University of York, sponsored by Rank Xerox Cambridge EuroPARC. I was gripped by the idea that we could support critical thinking and collective intelligence by visualizing arguments in software — as evolving networks of textual nodes linked graphically into trees and networks — so that everyone could see quite clearly where the agreements and disagreements lay in a complex problem (Buckingham Shum, 2003).

30 years on, I’ve learnt a lot. Yes, on the one hand, mapping dialogues and debates can add great value to professionals in meetings, to students learning critical thinking, or to citizens deliberating together. We built visual hypermedia tools (IDEA project) including one (Compendium) that was the mapping tool of choice for hundreds of professionals and students who valued its hypertextual power over simple mind-mapping. We studied in great depth the skills and dispositions needed to be a fluent Dialogue Mapper in meetings (Selvin & Buckingham Shum, 2015), and many people added this to their personal and consulting toolkits. We shifted to the Web, developing a series of collaborative tools (ClaiMaker; Cohere; Evidence Hub) for making meaningful ‘claims’ about the connections between ideas in documents (Buckingham Shum, 2007; 2008). There have been research prototypes from myriad groups, and we’ve seen a few commercial products launch and achieve modest success in niches, inspired by these ideas. If only Bush could have seen these! I’m forever grateful that Engelbart did (2003), and was genuinely excited by their potential to help deliver his vision for Dynamic Knowledge Repositories and CollectiveIQ.

But... fundamentally, it’s very hard work introducing a new way of reading and writing — structuring thought as semiformal hypertextual networks is a new literacy. Some love it, and I remain hopeful that more will learn it — but realistically, outside formal education (and a fine endeavour that is), most won’t. It takes a long time to change educational systems, professional work practices, and mainstream web platforms. These tools for thinking take extra effort — they work precisely by slowing down your thinking in order to sharpen it. But not everyone welcomes learning how to think, alone or together.

30 years on, well, the world’s also changed. My quest started pre-Web, pre-Social Media. Now the world argues at light speed, often poorly, and with a lot of shouting. Facts can be arbitrarily declared to be fake, and sound argumentation is in short supply. That was always there, but essentially, we’ve handed argument-illiterate society the mother-of-all-amplifiers and loudspeakers (yes, complete with sub-woofers boosting the low-end signals, and penetrating tweeters).

Where next? Firstly, I remain convinced that dialogue and argument mapping have extraordinary power when used by people with the right skills, working in contexts committed to the careful, critical analysis of problems. But critical analysis ≠ productive dialogue, and it often takes skilled but scarce facilitators to build good maps. For societal ‘conversations that matter’ we have to move beyond talking about “fixing” poor discourse, as though argumentative rationality alone can heal the divides.

Confronted by the cold reality of trying to teach people visual argument literacy, I find new warmth and light in the following:

(1) A return to prose. Writing at its best makes possible extraordinary nuances of meaning that cannot be captured in the formal semantics of typed nodes and links used in argument mapping. I can both thank you for your contribution even as I respectfully challenge you; be self-deprecating in order to add rhetorical force; tell a moving story in order to drive home my logic. Emotions and arguments can dance together. I have new respect for this kind of writing, because it integrates head and heart, and helps to build common ground with ‘the enemy’. How can we build on and cultivate such literacy?

(2) Beyond detached, rational argument to personal reflection on experience. When we consider the tortured debates paralysing society, few people are going to be argued into submission through sheer force of evidence and argument. As societal sands shift, many react out of fear to experiences that threaten their way of life. I’m growing increasingly interested in how people make sense of unsettling experiences, and the role that writing reflectively about this can play in helping them process this (Buckingham Shum & Lucas, 2020). So, having worked all my life on the power of external representations to augment critical thinking, I’m curious to extend this to questions around personal and professional growth. We make sense of complexity by constructing plausible narratives that we tell ourselves, and others. The act of trying to write that narrative can help shape it. This is not about rejecting rationality, but recognising that there are different rationalities in play, and they do not all fit easily into the clothes of argumentation.

(3) Enter A.I. Working with natural language processing experts for the last decade or so has been eye-opening, and pivotal in guiding transitions 1+2 above. It turns out that we don’t have to structure our thoughts as nodes and links for the machines to understand our ideas: they can now extract some useful structure from prose. Developing what I believe to be the world’s first tool to give instant feedback on reflective writing has been exciting, but we’ve only scratched the surface. Our AcaWriter tool can give helpful prompts to writers seeking to learn other genres of writing too, recognising whether they are making rhetorical moves that are canonical hallmarks (Knight, et al. 2020).

So, The Future Of Text? It’s as easy as 1+2+3... As you read this in 2050, I imagine life is just as messy as I write on May Day (May Day!) 2020, with society locked down while we ride out a global pandemic, and our planet at risk of ecological collapse. However, I hope that you and your children, your friends and professional colleagues, are all learning to think and reflect deeply in your writing. Prose is now just another user interface, so I expect machines are giving you outstanding coaching prompts on the depth of your critical thinking and personal reflection, helping you to recognise where it is shallow and could go deeper. Perhaps that is pause for thought on where you are shallow and could go deeper.

What’s reassuring is that from an early age you were taught how to critique automated critique. You know that machines still cannot read between the lines the way we do.


Sofie Beier

A Text That Reads You

Imagine intelligent text that will be able to read you while you read the text. The rapid development within modern technology is accompanied by an as-yet underexplored potential for designing new reading platforms to help those who struggle with reading and to enable reading in situations where it was not previously possible. By measuring the reader’s eye movements, such a platform would be able to analyse their reading strategy and identify whether they are, for example, novice readers, elderly readers or speed readers, or whether they have difficulty concentrating. Based on this information, the text could adjust to the reader’s needs by restructuring the content or modifying the typographical layout. This concept represents an extraordinary untapped potential. The technology is available to us; we just need to identify and develop the possibilities.

By analysing the way your eyes move as you do a quick search on your phone while you are on the go, a text that reads you would be able to detect that your concentration is divided in the specific reading situation. To accommodate, the text could then provide a shorter version of the information you are looking for than the one provided when you are comfortably stretched out on your living room sofa. In the latter situation, your steady reading pattern would inform the text that you are in a state of mind where a longer, more elaborate version would likely suit you better. Such new reading platforms could also potentially help low-vision readers, as the typographical setting could adapt dynamically to the reader’s need for larger font sizes, more suitable font choices and a different colour contrast. Another relevant target group would be struggling readers: by identifying irregular patterns in eye movements, the text could support novice or dyslexic readers by producing modified text versions with simpler structures.
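As a purely hypothetical sketch of such a platform's core loop, the code below classifies a reading pattern and picks a typographic setting. The thresholds, categories and adaptations are invented for illustration; they are not drawn from any real eye-tracking model:

```python
# Hypothetical sketch of "a text that reads you": classify the reader's
# eye-movement pattern, then adapt content and typography accordingly.

def classify_reader(avg_fixation_ms, regression_rate):
    """regression_rate = fraction of eye movements that go backwards
    (re-reading). All thresholds here are illustrative assumptions."""
    if regression_rate > 0.3:
        return "struggling"      # frequent re-reading suggests difficulty
    if avg_fixation_ms < 180:
        return "skimming"        # short fixations: divided attention
    return "steady"              # relaxed, sustained reading

ADAPTATIONS = {
    "struggling": {"font_size_pt": 14, "version": "simplified"},
    "skimming":   {"font_size_pt": 12, "version": "summary"},
    "steady":     {"font_size_pt": 11, "version": "full"},
}

# A reader glancing at their phone on the go: short, forward fixations.
profile = classify_reader(avg_fixation_ms=150, regression_rate=0.1)
print(ADAPTATIONS[profile]["version"])
```

The same loop, with the "steady" profile, would serve the longer, more elaborate version to the reader on the sofa.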

One way of approaching the development of these reading platforms is to stop blindly following traditions that are based on old production methods and instead begin to question the assumption that text representation has to be static. We need to identify the span of human capacity for reading and set text based on that. Such an approach could lead to a paradigm shift that would challenge the tradition of designing text representation based on the available media and instead focus on designing text representation tailored to individual human perception. There is no doubt that traditional static text representation has a place in the future of reading. However, we owe it to ourselves to try to supplement this approach with new directions in order to ease everyday life for people who struggle to read due to perceptual, cognitive or situational limitations.

Sonja Knecht

Text, Sex, Scheiße

Sex. Sex! Sex! Sex! Sex! Sex! Sex is everywhere and nowhere. Sex imposes itself. Some words are omnipresent; they produce pictures in our heads, pleasant feelings, or anxiety. What word could illustrate this better than sex?

Thesis 1: The best images are created in the mind. By words

Everyone is different – with words, and with encounters. When it comes to the notion of sex, some people may relax and become particularly open and attentive, or expectant; others prefer to flee. This applies in particular to words and facts that have a certain potential for embarrassment. The best images are created by words. The worst ones are, too.

People talk damn often about sex or pretend to talk about it. But sex is not really present. Sex or sexual allusions are used to create moods: in politics and related areas, in work, society, media, in situations that have nothing to do with sex – and they serve all the more as a means to an end. No wonder that sexism, racism, and fascism so often go hand in hand.

Prominent examples are provided, above all, by the current American President. The striking example with which he became famous may serve as a self-experiment here, slightly adapted. Read it slowly and preferably aloud:

I tried to fuck him. He was married. I moved on him like […] I don’t even wait. And when you’re a star they let you do it. You can do anything. Grab them by the prick. You can do anything.

The original by Donald J. Trump came to the public in October 2016. That same autumn, he was elected President of the United States of America. Shit! No matter who talks like that to whom and in which gender or whatever direction: this is shit, spreading in the US since then – and not only in the US – but massively since mid-November 2016. And it is not just linguistic filth.

If someone who holds a public office is allowed to speak like this without being chased away in disgrace and shame – and without any effect on his exposed position – then this has fatal consequences. Such a statement becomes independent; it reinforces and confirms itself again and again. A self-staging like this, unchallenged, manifests further possibilities. It paves the way for future action. The one who has the power and the recklessness to assert such things, regardless of whether they are true or not, asserts himself and empowers fans and followers.

We’ve seen it: the aforementioned positioning was the drumbeat for a seizure of power and a fight without equal, by means of language.

The worst thing about this is not the often-quoted p-sentence. Even worse are the sentences just before and after it, with “you can do anything” repeated. The embedded p-phrase just serves as one example of what you can do. Stated here in all clarity is that you can do everything – if you only dare. Just do what you want, if you have the power, the money, and the position to do so. Obviously, you will not be disempowered nor punished.

Needless to say: this is not about sexuality.

Thesis 2: Text is information. Always

Text is information. But what kind of information? There is more in play. A lot of emotion might be involved, some strategy, and intentions. There is always something that resonates beyond the content. The way that you say something, which words you use in which tone towards whom in which situation, actually says everything. Most of all about the speaker. What has been secreted here by the incumbent US president – the man, mind you, classically described as the most powerful man in the world – is shit in a comprehensive, fundamental sense. Shit in every sense. Big Scheiße.

At first, Scheiße is only a word though. A harmless little word. Let’s take a closer look: Where does it come from? Scheiße found its way into the Duden – the dictionary of the German language – in 1934. Exactly at the time when the Nazis came to power in Germany. It was in 1934 that Adolf Hitler united the offices of the Reich Chancellor and Reich President and called himself Führer. Coincidence or not: dictionaries are contemporary witnesses. They document the signs of the times. The word Führer surely did not accidentally come to its new special meaning. Of course, it was installed consciously. Also, nota bene, both Scheiße and shit are currently experiencing a renaissance.

Dictionaries are witnesses! In the way we speak, we shape our language and we shape the times we live in. Terms come up like fashion, they mark trends, and vice versa: with the words we choose, we describe and evaluate what happens around us. We shape contemporary history with our speaking gestures, and with our linguistic actions.

Thesis 3: Language is a tool. Everywhere

Text is language, designed. In the moment we speak or write, we design (with) language. If we are aware of this – if we know our vocabulary and what we can do with it – we gain countless possibilities. A clear attitude, a bit of communicative skill and practice, some experimentation perhaps, and we will discover a world of possibilities. We can make a change via language. It is a wonderful tool. We can achieve our goals and we can reach other people via language. With words, we create our life, our world, and we shape every encounter, each and every day. Everything.

Forevermore, we are also recipients of language. Probably every one of us may remember beautiful words forever, but also deeply disturbing messages; a sentence that has caused sadness or shock, a word that has hit us hard. How could we forget? Words work.

It is crucially important for us to pay attention to our language and the language of those around us. What do they want to tell us? What do we want to say? What impressions do we want to make? Every day we decide what to do with words. We decide if we use language to exercise power, manipulation and marketing, for instance, or if we want to exude beauty, magic, and meaning.

It is in our hands.

Stephan Kreutzer

Future Of Text

Let me try to briefly describe a certain Future Of Text that has largely been abandoned since the advent of Word, the Web and DTP. In his book “Track Changes”, Matthew G. Kirschenbaum reconstructs the otherwise already forgotten history of word processing: in contrast to today’s use of the term, it initially referred to a model for organizing the office, then to electronic typewriter machines and later to a wide array of software packages reflecting every imaginable combination of generally useful features and affordances for manipulating text on a computer. From print-perfect corporate letters to authors changing their manuscripts over and over again instead of relying on the services of an editor, electronic writing had to develop around purely textual aspects because of the pressing hardware limitations at the time. Naturally, the early hypertext pioneers expected a new era of powerful tools/instruments that would augment reading, writing and curation way beyond what humankind had built for itself for that purpose so far. Today we know that this future isn’t what happened.

Text by its very nature is a universal cultural technique – and so must be the tools and conventions involved in its production and consumption. Consider a whole infrastructure for text composed of standards, capabilities, formats and implementations that follow a particular architecture analogous to POSIX, the OSI reference model and the DIKW pyramid. Such a system would need to be organized in separate layers specifically designed to bootstrap semantics from byte-order endianness to character encoding, advancing towards syntactical format primitives up to the functional meaning of a portion of text. Similar to ReST and its HATEOAS or what XHTML introduced to Web browsers, overlays of semantic instructions would drive capabilities, converters, interface controls and dynamic rendering in an engine that orchestrates the synthesis of such interoperable components. Users could customize their installation quite flexibly or just import different preexisting settings from a repository maintained by the community – text processing a little bit like Blender with its flow/node-based approach.
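As a toy illustration of that layering, the sketch below bootstraps meaning from raw bytes through character encoding and syntactic primitives up to functional rendering. The miniature format and the renderers are invented here purely for illustration; the point is that each layer consumes only the layer below it, so any one of them can be swapped independently:

```python
# Layered text processing, bottom-up.
raw = b'\xef\xbb\xbf# Title\nBody text.'   # layer 0: bytes on disk/wire

# layer 1: byte order / character encoding -> abstract characters
text = raw.decode("utf-8-sig")             # strips the BOM, yields str

# layer 2: syntactical format primitives -> structured portions of text
def parse_blocks(s):
    blocks = []
    for line in s.splitlines():
        if line.startswith("# "):
            blocks.append(("heading", line[2:]))
        elif line.strip():
            blocks.append(("paragraph", line))
    return blocks

# layer 3: functional meaning drives presentation (here: trivial rendering)
RENDERERS = {
    "heading":   lambda t: t.upper(),
    "paragraph": lambda t: t,
}

for kind, content in parse_blocks(text):
    print(RENDERERS[kind](content))
```

Swapping the renderer table for a different one (or importing a community-maintained set, as the essay suggests) would change presentation without touching the lower layers.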

Is this the insanity of a mad man? Maybe, but we have seen variations of this working before with some bits and pieces still in operation here and there. This is not rocket science, this is not too hard, it’s just a lot to do and few are actively contributing because there’s no big money in foundational text technology anymore. By now, much better hypertext and hypermedia tools are urgently needed as an institutional and humanitarian cause. The Future Of Text and its supporting infrastructure can’t be a single product, trapped in a specific execution environment, corrupted by economic interests or restricted by legal demands. Instead, imagine a future in which authors publish directly into the Great Library of everything that has ever been written, that’s constantly curated by crowds of knowledge workers for everybody to have a complete local copy, presented and augmented in any way any reader could ever wish for. If this is too big for now, a decent system to help with managing your own personal archive and the library of collected canonical works would be a good start as well.

After cheap paper became available and the printing press was invented, it still took many generations of intelligent minds to eventually figure out what the medium can and wants to be. Likewise, the industrial revolution called for a long and ongoing struggle to establish worker rights in order to counter boundless exploitation. With our antiquated mindsets and mentality, there’s a real risk that we simply won’t allow digital technology to realize its full potential in our service for another 100-300 years, and the Future Of Text might be no exception.



Blender

A freely licensed software application for creating 3D models and animations.


Bootstrapping

The method of launching higher stages of complexity from lower, more primitive stages. This is how a computer operating system boots itself up from a single electrical impulse caused by the push of a button to the BIOS firmware, starting the operating system and finally setting up user space applications. Douglas Engelbart proposed a similar concept for exponential improvement in which the lower stages are used to create a much better, new stage, on which the process is repeated all over again.


Browser

The generic term describes a category of software applications for navigating text/hypertext. Their main function is to request/retrieve resources (local or remote) and to parse/interpret them for subsequent augmentation. Semantics of standardized meaning allow the browser to recognize special instructions within the payload data of a resource so it gets a chance to apply custom settings when preparing the presentation for the user. Typical browsers for the Web can’t interoperate with local capabilities of the client or with servers different from the origin because they’re automatically executing untrusted code scripts sent by the current origin server of a session, so they need to be sandboxed in order to avoid endangering the security of the client system or the confidentiality of user data.

DIKW pyramid

“Data, Information, Knowledge, Wisdom” describes the hierarchy of the encoding of meaning in information systems. From the atomic data handling primitives of the physical carrier medium up to the complexity of good and wise use (or, say, meaningful interpretation), the model suggests that the implementation on each of the layers can be changed without affecting all the other parts because the internal matters of information handling of a stage are contained and isolated from the other stages. Michael Polanyi presents a similar notion in his book “Personal Knowledge”.


DTP

“Desktop publishing” refers to laying out pages for print using a computer, roughly resembling the approach Gutenberg used – setting type by hand. Software applications in this category are for creating flyers and magazines. They are not for writing, nor for typesetting long-form text.


HATEOAS

The concept of “Hypermedia as the Engine of Application State” recognized by ReST describes that capabilities of a system could be driven by data, not by the custom, non-standardized, incompatible programming of a particular application. “Hypermedia” refers to the notion that semantic instructions embedded in other payload data could operate corresponding controls and functions, maybe even as part of a larger infrastructure in which different components are plugged in to handle their specific tasks according to the standard they’re implementing. It could well be that still no real hypermedia format exists to this day that would provide the semantics needed for ReST, and the Web with HTML doesn’t support ReST either.

OSI reference model

The Open Systems Interconnection framework describes how transmission of data between two endpoints in a network can be interpreted in terms of horizontal hierarchical layers which are isolated against each other, so a change of implementation/protocol on one of the layers doesn’t affect all the other parts, avoiding inter-dependencies and a vertical, monolithic architecture.


POSIX

The Portable Operating System Interface standardizes basic capabilities of a computer operating system. User-space third-party applications can be built on top and gain code portability in regard to other POSIX-compliant systems.


ReST

“Representational State Transfer” is the rediscovery of the semantics in the Hypertext Transfer Protocol (HTTP). The simple HTTP instructions allow the client to interact with a server in a standardized way. The hypermedia response may inform the client about new request options that could be taken automatically or manually. Furthermore, the semantics define URLs as arbitrary unique static IDs (not carrying any meaning nor reflecting/exposing the directory structure of the host operating system), so no custom client-side programming logic is needed to know about how to construct URLs that correspond to the schema expected by a particular server. Instead, a client is enabled to easily request hypermedia representations of a remote resource, make sense of it and adjust its internal application state accordingly.
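A minimal sketch of that hypermedia-driven idea, using an in-memory stand-in for the server so the example is self-contained; the resource paths and the "payment" link relation are invented for illustration:

```python
# The client never constructs URLs itself: it follows links advertised
# by the server's responses, keyed by their relation ("rel"), and
# adjusts its application state from what it receives.

# Stand-in for server-side resources (normally fetched over HTTP):
RESOURCES = {
    "/orders/42": {
        "state": "unpaid",
        "links": {"payment": "/orders/42/payment", "self": "/orders/42"},
    },
    "/orders/42/payment": {
        "state": "paid",
        "links": {"self": "/orders/42/payment"},
    },
}

def get(url):
    """Stand-in for an HTTP GET returning a hypermedia representation."""
    return RESOURCES[url]

# The client knows only the entry point and the *meaning* of "payment";
# the URL "/orders/42/payment" is opaque data supplied by the server.
order = get("/orders/42")
if "payment" in order["links"]:
    receipt = get(order["links"]["payment"])
    print(receipt["state"])
```

Because the server could rename its URLs tomorrow without breaking this client, the coupling lives entirely in the standardized link relations, which is the point of HATEOAS.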


Web

The World Wide Web is a stack of protocols, standards and software implementations which was initially designed for accessing and navigating between separate, incompatible document repositories via a shared, common interface. Today, it’s mostly a programming framework for online applications like shops or games. The support for text and semantics is very poor.


Word

Microsoft Word is a restrictively licensed software application in the tradition of earlier word-processor packages. Its main purpose is to allow digital editing of short corporate or personal letters for print. It is neither a writing tool nor designed for writing books, and it is not for typesetting or desktop publishing.


XHTML

XML is a very simple format convention of primitives for structuring text semantically. HTML is a more specific format for defining the structure of a Web page. Since XML and HTML share the same origin of SGML, it makes a lot of sense to formulate the specific HTML Web page format in the general XML format convention, resulting in XHTML (eXtensible HyperText Markup Language). There are many tools and programming interfaces for XML available which therefore can also read XHTML: the Web could have become semantic and programmable via small agents/clients that implement hypertext/hypermedia capabilities. With regular HTML, a huge parsing engine is needed, especially because most published HTML is ill-formed and broken, leaving only bloated, sandboxed Web browser applications as candidates for guessing the structure of HTML documents.
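Because XHTML is well-formed XML, a generic XML parser from a language's standard library can already process it; no browser-sized engine is required. A minimal Python sketch, extracting the links from a small (made-up) XHTML fragment:

```python
import xml.etree.ElementTree as ET

xhtml = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <p>See <a href="http://example.org/a">one</a>
       and <a href="http://example.org/b">two</a>.</p>
  </body>
</html>"""

root = ET.fromstring(xhtml)                      # generic XML parse
ns = {"x": "http://www.w3.org/1999/xhtml"}       # XHTML namespace
links = [a.get("href") for a in root.findall(".//x:a", ns)]
print(links)
```

A small agent like this could crawl, index or augment XHTML pages with a few lines of code, which is exactly the programmability the entry argues was lost with ill-formed HTML.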

Stephanie Strickland

Future Of Text

Most poets, often very early in life, are captured by sound and the cadence of words. Though channeled through finger-shaped signs, cursive writing, writing machines, and code—each mode more abstract—their loyalty to sound persists.

Being bearers of knowledge, mostly hidden to consciousness, and of conscientious protocols, poets feel charged to explore all chances for transformation available to voice, aurality, and text. Many question how human nature will survive, if it should, in technocentric pan-digital regimes. One or more of the following routes could be taken.

Include a pause option under user/player/reader/human control and align choices across an analog spectrum—slider, hovering cursor, dial—in all navigation for the hand, eye, or whole body in augmented or immersive realities.

Learn the extreme bias and influence of the training data used for your system. Make language to counteract or re-program it. We see shapes, AI sees textures.

Understand fractality—you don’t gain significantly more information by monitoring more than a fraction of the environment. Small samples serve as well as (and thus better than) exhaustive explorations, because the same view can be found at multiple scales for many states of the world.

Provide on call at all times a translator to human scale (for time and space location, including multi-dimensional mathematical space and space supporting alternative geometries). Make another for multiple versions/outputs of human language, including “free fall” text, letters readable in free fall (on board a vehicle in space) from above, below, in any orientation.

Encourage physical intuition. Technology is its own life-form. Sing to it. New platforms are possible!

Mine the resources of entanglement. Teach Lynn Margulis. Poetry is emergent properties of interaction (between words, context, and source) and is a self-reinforcing, networking aggregate.

Maximize ambiguity=options for survival. Teach decoherence, the quantum-to-classical (mystifying to perceptible) transition. Write quantum error-correcting codes, texts that protect information not in single jittery qubits but in patterns of entanglement among many, as we once made reliable physical computers from flawed vacuum tubes. Or put it into holograms.

“The whole sport of peal-ringing grew out of the quest, intensifying at the end of the 17th century, to ring the extent on seven bells—to sound them in every possible order,” according to The North American Guild of Change Ringers. There are 7! (5040) permutations on seven bells! (Note the difference between those two exclamation points.) If we could become bell-ringers, change ringers, we might re-inaugurate the sport of enacting mathematical patterns, dances, with our bodies. The Future Of Text is its deep imbrication with complex mathematical patterns. We will become one with it, according to rules set in place now. Cuidado! Presta mucha atención!
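The arithmetic of the extent can be checked in a couple of lines:

```python
from itertools import permutations

# The "extent" on seven bells: every possible order, 7! of them.
bells = range(1, 8)
extent = list(permutations(bells))
print(len(extent))   # 7 * 6 * 5 * 4 * 3 * 2 * 1 = 5040
```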

A future poem, of whatever sort, to hold fast with the past will seep like water drops or small currents of air into the juggernaut of its language to create the opening, the pause, which will disarm it.

Stephen H. Lekson

Text? Books?

I write books. Conventional paper, ink, pasteboard books. Boring books – I’m an academic. Which becomes my excuse for writing boring conventional books: they made me do it.

Specifically, I am an American academic in the humanities. The dead hand of history prescribes that American academics in the humanities must write single-author conventional books – weighty, substantial tomes – or they vanish. Publish or perish.

My actual job title was Curator of Archaeology (at my university’s museum) and Professor (in my university’s Anthropology Department). When I took the job, I assumed I would be curating, the care and feeding of the collections. (I curated before curating became cool.) But in analyzing how I might lose my job, I discovered that curation had little to do with it. If I was a mediocre curator, they might not increase my salary in the year-end evaluations. But that would be it. (Unless I sold the collections and fled to France…) The principal institutional cause for my dismissal would be insufficient research. And “research” at my and other American universities is measured by peer-reviewed publications. If, after my probationary period, I failed to crank out a hefty peer-reviewed monograph from a respectable press, they would snip off my buttons and drum me out of the regiment. I would be denied tenure: academic death and, of course, the loss of a cushy job.

That’s the model we drill into the heads of young humanities scholars: single-author monographs (followed, for the rest of a career, by single-author articles). And, if those young scholars achieve tenure, that’s the model they drill into the heads of their students: single-author monographs. And, as the philosopher said, so on and so on…

In the sciences, a multi-authored article (or two or three) in a first-line, oft-cited journal might be enough, if your name appears high up in the list. The sciences are also more savvy about online peer-reviewed publications. Humanities departments still want to hear a good solid thump when the product hits the Dean’s desk.

I did my boring books, got tenure, and then stretched out, a bit. Not all my books are dull, at least to sympathetic readers. Several successfully “crossed over”: academic books from academic publishers that found a (small, niche) popular audience. I wrote a narrative history of a prehistoric place and time, which (I was advised) could not be done. I wrote a book with an uncomfortably New Agey title, on a theme which could be (and has been) hijacked by kooks; but which has also had a profound effect on my field. And over the years I’ve experimented within the constraints of the form, the conventional paper, ink, pasteboard book. Most notably with footnotes, which American archaeologists eschew. Layers and strings and lattices of evidence and inference propped up my narrative history of a place and time lacking actual documents. Initially, I presented those arguments in the text; it was unreadable.

So I banished them to voluminous footnotes – not merely simple citations but page-long essays. As published, there were as many words in the footnotes as in the text itself. Not everyone was pleased: I apologized to angry readers who bought two copies of the book to avoid paper-cuts from constant flipping of pages, text to notes. So in my next book, I convinced the publisher to put the notes (again, about half of the book by word-count) online. Making it easier, I hoped, to access the arguments beneath the narrative. And that opened a door to which I can point, but not myself pass through.

A single author cannot write a narrative history for prehistory. I did, but only by hijacking the work of many of my betters – and quite possibly bending their notions to my will. Footnotes brought other minds into my work. The necessary minutiae and detail of archaeology (and other humanities) are beyond my capacity and, I suspect, that of most of my colleagues. (This is why we specialize, compartmentalizing the sprawling messy Big Story into digestible bits.) But as information accrues – archaeology is not static – the narrative must change. New discoveries support or destroy or modify accepted interpretations. In this, archaeology is like science. (Think: plate tectonics.) But robust narratives can accommodate new information. The narrative arc remains, but the sub-plots and details change. In this, archaeology is like history. (Think: biographies of Lincoln.) My solution, which I proposed in the last book I propose to write, was a wiki: a moderated, multi-authored, dynamic, deeply layered online creation that would change with new data or new ideas. A moving target, but with citations that take the user/reader all the way back to primary sources, and then forward through explicit lines of inference and deduction. All under-writing, literally, the narrative history that should be the goal of archaeology, but which American archaeology almost entirely lacks.

But would that get you tenure? Probably not. So: a portal I and my ilk shall not pass.

Stevan Harnad

The PostGutenberg Galaxy: Eppur Si Muove

It was a mistake to invite me to make a contribution to this volume on the Future Of Text. After all, in the past every single one of my predictions about the future has been wrong. In 1978 I founded a (print) journal of Open Peer Commentary, Behavioral and Brain Sciences (BBS), which published peer-reviewed articles judged by the referees to be important enough for co-publication with 20-30 critical mini-articles from experts across disciplines and around the world, followed by the author’s formal response (Harnad 1978, 1979). BBS was successful enough as a journal, but I had expected that the Open Peer Commentary feature would catch on, and that soon all journals would be offering it. After all, I had merely copied it from the journal Current Anthropology, founded by Sol Tax in 1959 (Silverman 2009). A few journals did occasionally implement Open Peer Commentary, but as far as I know no other journal used it as a mainstay for all their articles.

A decade went by. Arpanet became the Internet and in 1989 Tim Berners-Lee gave the world the web. So I thought maybe print had just been the wrong medium for Open Peer Commentary. I founded another Open Peer Commentary journal, Psycoloquy, an online one (also one of the first of what would eventually come to be called Open Access (OA) journals, their articles free online for all users), expecting that Psycoloquy would soon supersede BBS, end the print-journal era and usher in the era of both OA journals and Open Peer Commentary (which I had come to call “Scholarly Skywriting”; Harnad 1987/2011; 1990; 1991).

Psycoloquy lasted from 1990 to 2002, but none of my predictions came true. Journals remained print-based and toll-access, and Skywriting did not catch on. In 1991 Paul Ginsparg created an online repository, arXiv, for authors to self-archive the preprints of their articles in physics. It really caught on in physics, so in 1994 I posted a “subversive proposal” (Harnad 1995) urging all authors, in all disciplines, to self-archive their preprints as the physicists had. That would make all journal articles free for all online, and then universities could cancel their subscriptions and pay for the peer review of their authors’ article output out of a small fraction of their windfall savings (Harnad 1998a, 1999).

This started some movement toward pay-to-publish journals, but little self-archiving, so no journal cancelations. Maybe authors had no place to self-archive? To help get the ball rolling we created CogPrints for research in cognitive science in 1997, but in over 20 years the ball has barely moved.

Perhaps researchers preferred to self-archive in their home institution’s repository? In 1999 Van de Sompel and Ginsparg created the OAI Protocol to make all repositories interoperable, so it would not matter which one you deposited your paper in. But authors still were not self-archiving. So in 2000 a Southampton doctoral student designed a new, free software package, EPrints, that would allow any university to create a customized interoperable repository of its own (Tansley & Harnad 2000). MIT soon poached the student, who then created DSpace. Both packages were widely taken up all over the world (Registry of Open Access Repositories, ROAR).
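The interoperability that the OAI Protocol provides is concrete and simple: every compliant repository answers the same HTTP requests with the same verbs. A minimal sketch of building such a request follows; the repository endpoint below is a hypothetical example, not a real service.

```javascript
// Build an OAI-PMH request URL. Any compliant repository (EPrints,
// DSpace, arXiv) answers the same verbs: Identify, ListRecords,
// GetRecord, and so on. The endpoint below is hypothetical.
const BASE = "https://repository.example.edu/oai";

function oaiRequest(verb, params = {}) {
  // Serialize verb plus any protocol arguments into a query string.
  const query = new URLSearchParams({ verb, ...params });
  return `${BASE}?${query.toString()}`;
}

// Harvest all records as Dublin Core metadata:
console.log(oaiRequest("ListRecords", { metadataPrefix: "oai_dc" }));
// -> https://repository.example.edu/oai?verb=ListRecords&metadataPrefix=oai_dc
```

Because the request shape is identical everywhere, a harvester written once can aggregate every repository in ROAR, which is exactly the point of the protocol.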

I was sure the open repositories would now fill (Harnad 2001), what with the launch of the Budapest Open Access Initiative in 2001, under the support of the Open Society Foundations of the philanthropist George Soros, followed by the Berlin Declaration on Open Access in 2003. But no: most authors were still not self-archiving. Maybe they needed incentive from their institutions and funders, who accordingly began to adopt self-archiving mandates (Registry of Open Access Repository Mandates and Policies, ROARMAP) in 2004.

We kept providing evidence that if well designed, mandates work, generating self-archiving and increasing research uptake and impact for authors and institutions (Harnad & Brody 2004; Hajjem et al. 2005; Brody et al. 2006; Gargouri et al. 2010; Harnad 2009; Vincent-Lamarre et al 2016). We even designed an automated “eprint request” button so that authors who were too timid to make their self-archived papers OA during a publisher “embargo” could still provide “almost-open-access” to users during the embargo (Sale et al. 2014).

But most institutions and funders just adopted wishy-washy mandates, so institutional repositories were and have remained near-empty, button or no button. Wrong again. Meanwhile, while self-archiving (which had come to be called “Green OA”: Harnad et al. 2004) has been languishing, pay-to-publish OA journals – both benign ones and bogus ones – have been flourishing. This came to be called “Gold OA.” The Subversive Proposal had been aimed at eventually making all journals Gold OA, but not before Green OA self-archiving had first prevailed and prepared the ground for “Fair Gold” OA, charging for and providing only peer review (which journal editors manage, but peers do for free; Harnad 2010; 2015). But instead of (1) institutions and funders mandating author self-archiving first and (2) waiting for Green OA to become universal and (3) to make subscriptions unsustainable so that (4) cancelations could force publishers to (5) downsize to Fair Gold OA, what happened was that publishers offered “Fool’s Gold OA” pre-emptively: pay to publish at an arbitrarily inflated price, designed to sustain publishers’ existing revenue streams and modus operandi. And the institutions and funders bought it! So I was wrong yet again (Harnad 2016).

Nor has Scholarly Skywriting prevailed. That would have required journals to become interactive blogs, hosting Open Peer Commentary on all their articles. But there’s no revenue in that: Why should they bother to do it? The revenue is in the Fool’s Gold OA articles.

So I called it quits; I stopped archivangelizing. But I’m not ready to concede that I was wrong in principle. I still think that what I always called the “optimal and inevitable” outcome will prevail (Shadbolt et al. 2006). It will just happen a lot later than it would have needed to. We’re still in a mixed ecology of partial Green OA, subscription/license access and Fool’s Gold OA. My latest (and last) prediction is that once Fool’s Gold OA has become universal, Open Peer Commentary itself will be autonomously “overlaid” on the peer-reviewed OA corpus. I never believed in “open peer review,” overlaid on unrefereed preprints. That’s not how peer review works; authors need to be answerable to their peers before an article is tagged as “refereed and published,” with the journal’s imprimatur and track record, “safe” to be used, applied and built upon (Harnad 1998b). That’s the part Fair Gold had been meant to pay for. But if peer review is instead paid for via Fool’s Gold OA, so be it. It’s still OA, hence ready for the real overlay: Open Peer Commentary, implemented by a new generation of meta-editors and software, independently of the Fool’s Gold business, and furnishing the interactive capability that our brains evolved for with the advent of language and the oral tradition, restoring scholarly and scientific interaction in the PostGutenberg Galaxy to the speed of thought instead of the sluggish pace of writing and print (Harnad 2004).


Steve Newcomb

Three Conjectures

Texts are chains of symbols, with each symbol’s significance dependent on its context. So far, text’s history is a sequence of two versions, with the earlier one implying the later:

Text v.1: “Natural Text”: Not originally invented by human beings, text enabled the existence of the entire biosphere which eventually included the human species. Using an alphabet of four nucleotides, nature’s memory technologies write, store and recall instructions that boot up and complete the construction of a reproductively functional individual of each living species.

Text v.2: “Artificial Text”: With no knowledge of natural text, human beings independently reinvented text. It enabled long-haul, low-memory-loss cultural evolution, in turn enabling human beings to accomplish far more than any single individual or generation could conceive, much less realize.

Now we stand at a pivotal moment in the history of text. Our culture has just begun to gain access to the formerly unknown and still only partially scrutable realms of natural text -- to the design and maintenance of species. Commerce between the universes of natural and artificial text has begun, and the distinctions between the two are blurring as never before.

What’s Text v.3? Maybe we can make some guesses by extrapolating the progression already observable in Text v.1 and v.2.


Natural texts are recorded in a single medium -- a single memory technology -- while artificial texts appear in a large and growing number of media.

Media of Text v.3: Perhaps there will be new classes of media based on new insights as to the nature of memory, semiotics, perspective, context, identity, quantum superposition, etc. Text v.3 may be a holographic medium that makes ambiguity more accurately and compellingly representable. For example, it may be easier to represent ideas whose appearance depends on one’s perspective.

Easy Conjecture: The memory technology of the biosphere -- nucleotide chains -- will increasingly be a medium for artificial text. Natural systems tend to be parsimonious, and nucleotide chains offer an outstanding value proposition in terms of cost of materials, manufacture, ownership, maintenance, duplication, stability, accuracy, and longevity.
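The parsimony of nucleotide chains as a storage medium can be made tangible: the simplest mapping packs two bits into each of the four bases, so one byte of artificial text becomes four bases. The sketch below is a toy illustration of that mapping only; real DNA-storage schemes add error correction and avoid long runs of a single base, which this code does not.

```javascript
// Toy text-to-nucleotide encoding: two bits per base, four bases
// per byte. Real DNA-storage schemes add error correction and
// avoid homopolymer runs; this illustration does neither.
const BASES = ["A", "C", "G", "T"];

function encode(text) {
  let out = "";
  for (const byte of Buffer.from(text, "utf8")) {
    // Emit the byte's four 2-bit groups, most significant first.
    for (let shift = 6; shift >= 0; shift -= 2) {
      out += BASES[(byte >> shift) & 3];
    }
  }
  return out;
}

function decode(dna) {
  const bytes = [];
  for (let i = 0; i < dna.length; i += 4) {
    let byte = 0;
    for (const base of dna.slice(i, i + 4)) {
      byte = (byte << 2) | BASES.indexOf(base);
    }
    bytes.push(byte);
  }
  return Buffer.from(bytes).toString("utf8");
}

console.log(encode("Hi")); // -> "CAGACGGC"
```

At this density a megabyte of prose is four million bases, a fraction of a typical bacterial genome, which is one way to appreciate the "outstanding value proposition" of the medium.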


The contents of natural texts appear to be restricted to whatever didn’t prevent earlier organisms from reproducing (natural selection). The contents of artificial texts have no such historical, purposeful, or preservational restrictions. New kinds of artificial content appear often. Perhaps most importantly, artificial texts can include cautionary tales about what didn’t or won’t work. They are the reproductive specifications of cultures, obeying the law of natural selection in a new way.

In view of the fact that artificial text emerged from the effects of natural text, it’s not surprising to see some phenomenological similarities between them. Both natural and artificial texts define, bind, and perpetuate species of communities. A complex organism is a community of specialized individual cells; each cell serves specific needs of the whole organism. In exchange, each cell derives its livelihood from its presence and participation in the whole organism. Similarly, a human culture is a community of specialized individual human beings. Each individual has a relationship to the whole community analogous to that of a cell in an organism. In both organisms and cultures, the functioning of each member is largely anticipated in the founding text.

Optimistic Conjecture: Given the historical sequence of...

(1) species: cells-in-community reproducing cells-in-community via natural text

(2) cultures: humans-in-community reproducing humans-in-community via artificial text

... what’s the next step? Perhaps we are privileged to witness the birth-pangs of...

(3) whole-biosphere community. Will it be bound together by some as-yet-unknown hybrid of natural and artificial text, employing behaviour-determining and immunological approaches already implicit in natural texts but not yet understood by us? How will a whole-planet community reproduce? What larger communities could such a community eventually participate in?

Cautionary Conjecture: Our cultures will either adapt to the unprecedented power with which they endow individual human beings, or we may all die. Consider the power wielded by any individual in possession of knowledge sufficient to manufacture fertilizer bombs, fentanyl, 3-D printed firearms, etc. Or to tweet the moral equivalent of shouting “Fire!” in a crowded theater that already smells of smoke. Or to compromise the functioning of any number of digital infrastructures critical to civilization as we know it. Most of us are not bomb throwers, but some of us are.

Now the growing commerce between natural and artificial text has significantly increased the risk of holocaust. One indicator is that the widespread CRISPR/Cas9 natural-text editing technology now endows knowledgeable individuals with the power to fundamentally affect the functioning of the biosphere. For example, an inexpensive, insecticide-free proposal would employ the technology to extinguish a species of common tropical mosquito, Aedes aegypti, a vector of several deadly diseases. This plan may well end much human suffering from the diseases this mosquito carries, but its other effects cannot be fully predicted. In any case, there may now be no way to protect Aedes aegypti from destruction.

We’re in a blood feud with these diseases, and not everyone will delay action while studies are completed and argued about, and misery plagues yet another generation. Unless we find a way to abate the risks that individuals may independently take, we’ll be vulnerable to the effects of artificial/natural texts that could jeopardize the ecosphere’s ability to support human life.

Maybe we’ll develop -- and be newly vulnerable to -- some new memory technology that can prevent individuals from ignoring or forgetting certain specific facts and risks. The development of such a technology will be, among other things, an immunological breakthrough -- yet another chapter in the arms-race section of the story of text. Natural texts contain records of approaches that may offer insight and didn’t kill the organisms that used them, at least not before they reproduced. After the new weapon is deployed into the biosphere by some well-intentioned entity, every individual will naturally assume responsibility for the continued functioning and prosperity of the biosphere, the planet will become humanity’s farm, and humanity itself will be numbered among the herds that everybody nurtures and protects. Right?

Stuart Moulthrop

Generated From

Run 007

The Future Of Text will be exercised. Who are they when she's at home? Rafael Nepô and Trace Beaulieu know something of the Future Of Text. What good are notebooks? We used to know a text that could totally deconstruct this one. The Future Of Text called. They want their nextness back. Alors! The Future Of Text includes complexity. The Future Of Text appears to be on the wires. If we'd known you were approaching we would have baked. For real! All text is against the future. It has not been possible to calibrate the Future Of Text. Don't look now. The Future Of Text began in June 2053. It lasted many weeks. 'The Future Of Text' does not use the letter X. The Future Of Text is not a dream. What you don't know about the Future Of Text could fill a hefty volume. As the anaesthesiologist said to the equerry.

Eat your entanglements. Animal logic is less engaging than the Future Of Text. Keep reading, we dare you. Dame Wendy Hall, Adam Cheyer, and Katie Baynes have been dreaming the Future Of Text. The Future Of Text will be compromised. We used to know a text that could totally deconstruct this one. What they don't know about the Future Of Text we'll figure out as we go. Talking about the Future Of Text is like imagining mind. The Future Of Text is surely not a dream. As Patti Smith tends to say, 'joy.' The Future Of Text is absolutely not safe. In the future text will be more cryptologic. As H. Marshall McLuhan allegedly said, 'pixel.'

The Future Of Text remains over our heads. In a convex mirror. I can put forth at least eight ideas about the Future Of Text. The Future Of Text has concluded in several proximate dimensions. Élika Ortega and Sarah Walton know about the Future Of Text. Ken Perlin as well. While writing the Future Of Text we must imagine the story of ink. I used to know a text that could run dotted rules around this one. Always ensure, never feel. The Future Of Text relies upon recursion. The gizmonic phoneme is the grand Future Of Text. The suture bespecked <FAULTCODE 1331><respawning> Meanwhile. They go without reason. The previous sentence does not contain the words 'future' or 'text'. Assuming that's a distinction you observe.

Talking about the Future Of Text is the Future Of Text. While advancing the Future Of Text we must uphold the archive. The Future Of Text is black. Cryptoculture must be more engaging than the Future Of Text. Nevertheless. What I don't know about the Future Of Text could fill a hefty volume. If the Future Of Text were whiskey it would be utterly intense. The Future Of Text is the Future Of Text as the text of the future. Maybe. Suppose the Future Of Text is written in the oceans. They have quite a bit to say about Future Of Text. The Future Of Text begins Sunday, 16 November 2022. (Though 16 November seems not to be a Sunday.) The Future Of Text remains birdsong. All that I think I know about the Future Of Text I borrowed from Ta-Nehisi Coates. The Future Of Text is non-Euclidean.

You must illuminate the Future Of Text. Quantum gravity seems to be less germinal than the Future Of Text. But of course. The future of this text is uncertain. Let's do this one last time. Or: 👎 🙈 🛀 🧙. Ripped from today's headlines: THE TURF OF TUXTEE. The Future Of Text relies upon algorithm. 'This is not done in Waukesha' is possibly text. The law won. Everything I know about the Future Of Text I got from Ian Bogost. The Future Of Text is not viral. Writing about the Future Of Text is the Future Of Text. In the future some of us will know text as a lifeform. In the future the text will be more or less infinite. The Future Of Text is in the animal and the machine. You can make a killing in text futures.

The Future Of Text called. They want their sequel rights back. As they say: FEET HEX TURF TOUT. In the summer of 1967 a bunch of us moved into this text in the Library of Congress and I guess there's no such thing as an innocent docent! The Future Of Text must be more material than the future of drones. This text is toward the future. Some detail could be found for mopery. The previous sentence does not contain the words 'future' or 'text'. The Future Of Text started in the dawn of forgetting. It lasted twenty-six semesters. TROUT TUX HEFT FEE. The Future Of Text is brought to you by pataphysics.

What we don't know about the Future Of Text you can make up as it comes. In the future most text will be more or less scriptonic. Small pieces loosely joined. Eat your vegetables. The previous sentence does not contain the words 'future' or 'text'. Dave Winer, Marc Canter, and Rikki Lee Jones know about the Future Of Text. Too much, recursion! The Future Of Text has entirely concluded in most universes. Some of us used to know a text that could letter the carpet with this one. The Future Of Text is yonder. The Future Of Text is more plausible than the future of desire. The Future Of Text is certainly not disruption. The intelligent page is the grand Future Of Text. FUR FEET TO HEX TUT. Talking about the Future Of Text is the Future Of Text. 'Start that' is possibly text. People refuse to believe in the Future Of Text. Looking at you, Lyle Skains. Silence. Text. Grammar. Root.

Run 012

Could there be a game in this text? The things of which this text is aware include: canvas, Hittite grammar, wordhoards, dedicated morphology, everything not included in this list, and knife sharpening.

Can we get away with this?

Why would you think that?

Everything I claim to know about the Future Of Text I got from Ada Lovelace. Text is membrane. 'The Future Of Text' does not use the letter D. The Future Of Text begins Thursday, 27 May 1901. (Though 27 May probably isn't a Thursday.) The Future Of Text is brought to you by Bing of the Nebelungs, Rafael Nepô, and Shane Gibson.

The text probably knows this already, but there is evidence of a scathing review in the auditorium. While advancing the Future Of Text we must consider dust. You have everything to say about Future Of Text. The Future Of Text is not the Future Of Texting. I went to Orlando and all I got was this chatroom. As the epistemologist said to the farrier. Unknowing complicates the Future Of Text. The Future Of Text will not be exercised. I can put forth six arguments about the Future Of Text for starters! If Jonathan Coulton and Trace Beaulieu had known about the Future Of Text they would have stayed in a farm upstate.

Do you think the text understands?

How would I know?

ELO the band. David Johnson and Gerard Manley Hopkins have been dreaming the Future Of Text.

Nothing that glitters must evolve. The previous sentence does not contain the words 'future' or 'text'. The sky's the limit in text futures. The Future Of Text will be compromised. But of course... Don't know if there's a future in this text thing. The synthetic platform is the grand Future Of Text. No simple way to say this. Mark Amerika and Thomas Malaby know something of the Future Of Text. Anna Anthropy as well. Does this make any kind of sense? Some of us have nothing to say about Future Of Text. In the future this visible text will be supple. The Future Of Text is scientifically intractable. Notes toward a subprime fiction. The Future Of Text began in December 1996. It lasted thirteen ticks. As the anesthesiologist said to the boundary maven. Writing about the Future Of Text is like holding the light.

I go without objection. The previous sentence does not contain the words 'future' or 'text'. We can say the Future Of Text is inscribed in lives. Don't eat that.

Is the text aware of this?

That would be telling.

In a convex mirror. As we used to say: dah doo dah dah. Nightingale just might be the Future Of Text. I made my fortune in text futures. Oh my! In the summer of 1968 a bunch of us moved into this lightly foxed text in the Bodleian and I guess there's no such thing as an innocent docent! While writing the Future Of Text we must consider the story of ink. You can put forth five ideas about the Future Of Text. The Future Of Text is scientifically intractable. Okay everybody. The Future Of Text is the Future Of Text as the text of the future. Could be. With help from Dame Helen Mirren. Math dot floor left-paren Math dot random left-paren right-paren asterisk numpty-some right-paren.

Text us from the future when you get there. What good are notebooks? If the Future Of Text were a continent it would be utterly awesome. The Future Of Text begins then. The ergodic phoneme is the ultimate Future Of Text. And how! The Future Of Text requires iteration. I went to Orlando and all I got was Zoom fatigue. Our text is about the future. Forgetting obscures the Future Of Text. Katie Baynes, H. Marshall McLuhan, and Lori Emerson are into the Future Of Text. Mark Sample as well. Kernel can be the Future Of Text. The Future Of Text commenced in April 1996. It lasted many innings. Math dot floor left-paren Math dot random left-paren right-paren asterisk whatever right-paren.

Everyone sings to the kernel. The previous sentence does not contain the words 'future' or 'text'. Text is membrane. The Future Of Text is not as advertised. This text knows all about Canadian striptease, credibility, The Juliet Letters, everything not included in this list, Frog and Toad (the song), and hand models. 'The Future Of Text' does not use the letter C. As our texts become more self-aware we need to think about our screen time. Tervantic bormal moofoe periwheel is a text from the year 2771. People dwell in the Future Of Text. What I know about the Future Of Text I remixed from Huldrych Zwingli. Urban legends in their own minds. I went to Taroko Gorge and all I got was this stupid script. The Future Of Text is brought to you by unbeing. Depend on it. Is there a text in this text? 'This is not done in Ohio' is possibly text. This text knows itself. And a few other things as well.

I continue to believe in the Future Of Text. The Future Of Text must be more or less material than the future of drones. Suppose the Future Of Text is defined in time. The Future Of Text started in the dawn of forgetting. It lasted thirty-odd innings. Notes toward a subprime fiction. If the Future Of Text were a symphony it would be truly obscure. With apologies to Cassandra the Consultant. The future of this text is evident. They have something to say about Future Of Text. The Future Of Text will be analyzed. Pathworkers of the world.

Beasts. Phrase. Madness. Compile.

1000 words

About The Work

This project began at ELOrlando in 2020 when Frode Hegland, developer of the Liquid platform for "self-aware" documents, invited me to write a thousand words for a book on The Future Of Text. Scanning the contributor list, which contains many names I know and many more I admire, it became clear that anything I could say by ordinary means would at best swell the chorus, which my Welsh grandmother would approve, but not this time, Nana. How can a document be self-aware? Mr. Hegland has his own ideas, as do the other learned contributors. I am still trying to work it out, approaching the problem characteristically backwards or by negation. Which is to say, I grasp more clearly how a document can embody the opposite of awareness. All it takes is a little knowledge of web scripting and a fundamentally disordered relationship to language. Et voila. You can consume this folly either in a single dose or an all-you-can-stand stream. The static version is the default. Select the scrolling version if you dare. Disclaimers: though the names of real people appear in this aleatory text, anything said about them proceeds from non-awareness, or machine fiction, assuming that's a distinction you observe. Warnings for repetition, animation, idiocy. The project uses substitution grammars, which are really nothing more than versions of the old MadLibs party game: My [noun] would [always/never] be caught [gerund] with a [noun]. The only art, if art there is, comes from crafting the optional word sets, the poetics of which is a bit like painting with an airbrush. Like all articulated contingencies, the system exhibits a range of behaviours, from occasionally charming to reliably foolish. The whole thing is built in JavaScript, which I have come to regard as the cybernetic equivalent of a cheap six-string guitar.
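For the curious, the substitution-grammar idea the author describes can be sketched in a few lines of JavaScript. This is a minimal reconstruction of the MadLibs-style technique, not Moulthrop's actual generator; the templates and word sets below are stand-ins drawn loosely from the runs above.

```javascript
// Minimal MadLibs-style substitution grammar, in the spirit of the
// generator described above. Templates and word sets are illustrative
// stand-ins, not the author's actual grammar.
const grammar = {
  sentence: [
    "The Future Of Text is #adj#.",
    "The Future Of Text relies upon #noun#.",
    "Talking about the Future Of Text is like #gerund# #noun#."
  ],
  adj: ["non-Euclidean", "cryptologic", "scientifically intractable"],
  noun: ["recursion", "algorithm", "birdsong"],
  gerund: ["imagining", "holding", "compiling"]
};

function pick(list) {
  // Math.floor(Math.random() * n) -- the incantation spelled out
  // phonetically in the runs themselves.
  return list[Math.floor(Math.random() * list.length)];
}

function expand(symbol) {
  // Choose a production, then recursively fill every #slot# marker.
  return pick(grammar[symbol]).replace(/#(\w+)#/g, (_, s) => expand(s));
}

console.log(expand("sentence"));
```

The craft, as the author notes, lies entirely in the word sets; the machinery itself is a dozen lines.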

Theodor Holm Nelson

Hypertext Reflections

“Text”, with no connections, is like a man with no arms or legs – able to speak but not to point. A text can refer to other texts, but often ambiguously.

There are so many possible ways to extend text--

- I would like lists with running history: you watch the list to see what you intended to do at a certain time, then whether you did it or not.

- I would like texts connected comic-book style (panels with visible connections -- what Jason Scott has called "Nelson documents").

What Tim Berners-Lee accomplished was to give us pointers--

- to specific other texts (URLs, URIs)

- to places in texts (anchors -- note that Doug had embedded anchors too; I was incorrect about that)

However, this will not work with neldocs in general, because link endsets have to overlap to any degree, which Web anchors do not.

I do propose building a format that will do this, allowing overlapping connections and pointers--

- between strips of text

- between panels like a comic (see Xanadu Basics 1a)

The tricks are--

- unambiguous references to text sections

- overlapping of endsets


“Text” on a computer generally refers to a lump of characters in .txt format. Since the early systems of plain text, it has evolved mostly in one direction: formatting, with paragraphing and fonts, the .doc format (and later, .pdf).

First problem: Text can only be in one column.

There are many ways to extend text temporarily, but on closing they generally reduce to the one-column single lump.

Second problem: text, with no connections, is like a man with no arms or legs-- able to speak but not to point.

Tim Berners-Lee gave us a new format that could point-- his .html format added one-way pointers--

- to specific server destinations (to web pages, the URL)

- to embedded destinations (anchors)


I have always wanted documents that do much more. In my designs, documents could have multiple connected windows, links could have multiple ends and directions. (Various prototypes may be seen in Youtube videos “Xanadu Basics 1a” and “Xanadu Basics 2”).

Until now, these documents have required stabilized content, never changing. Each new version had to be built by pointing at pieces of a previous version. However, stabilizing content no longer seems like a practical approach.


I am working on a new format that will allow multi-ended links and allow editability. This format will not require stabilized content. By proposing a data format, I hope to avoid the pains of developing a viewer; I leave that to others.

The proposed .neldoc format will allow overlapping connections--

Like .html, this format will have embedded elements, but unlike .html and its relatives, they will reference points and ends of text sections.

Like all projects, its completion time is unpredictable.
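The one property the proposal insists on -- endsets that may overlap, which HTML anchors cannot do -- can be illustrated with a toy data structure. Everything below (the field names, the character-offset convention) is invented for illustration; it is not Nelson's actual .neldoc design.

```javascript
// Toy illustration of overlapping link endsets. Spans are addressed
// by character offsets into the document, so two links may cover
// overlapping stretches of text -- something HTML anchors, which
// partition the document at fixed points, cannot express.
// All field names here are invented for illustration only.
const doc = "Text with no connections is like a man with no arms or legs.";

const links = [
  { ends: [{ from: 0, to: 26 }, { from: 35, to: 60 }] }, // a two-ended link
  { ends: [{ from: 10, to: 45 }] }                       // overlaps both ends above
];

// Half-open spans [a, b) and [c, d) overlap iff a < d and c < b.
function overlaps(s, t) {
  return s.from < t.to && t.from < s.to;
}

console.log(overlaps(links[0].ends[0], links[1].ends[0])); // true
```

Offset-based addressing is one way to get "unambiguous references to text sections," though it is exactly what makes editability hard: every edit shifts the offsets, which is presumably why the format problem remains open.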

part 2 : Narkup, A Propo