Last updated on June 23, 2020
Today is your birthday, Vint, and in gratitude for your support and encouragement over the years, I’d like to paint you a little picture of the kind of future we have been talking about.
As I slumbered lightly this beautiful summer day I could almost see it in my mind’s eye: I was pushing and pulling elements of text. It looked very much like an early Disney cartoon, perhaps Steamboat Willie, where everything is slightly pulsing with life – opportunities to move things around are manifest and endless – the promise of interacting with views to change perspectives, to change minds, was there, made ‘flesh’ in cartoon lines.
As well as data flows, I had various documents open in front of me. I saw citations in the text and I could pull on them to open the full documents cited or I could pull just the names of the authors to see a web of author and citations. I could arrange this in a pile or on a timeline, simply by twisting the web into the shape I wanted it. It was like a kinetic toy with infinite possibilities.
I can imagine this future now, having for a very long time suppressed this thinking in order to focus on dreaming about what I could actually deliver in software for Author and Reader and Liquid interactions.
In this world, open documents can, at the flick of a wrist, appear as poems, as colour-coded grammar, or colour-coded based on glossaries. They could even appear with common words slightly greyed out to highlight the words with more meaning. When I scroll backwards through a document at speed, the body text fades and contracts to make the headings more readable, and any product names, people’s names or company names get replaced by pictures or icons.
I can view anything by any criteria, navigate by any dimension of the information I am interested in. I can see and make connections like never before.
I tear off a snippet of text into a new document, where it lands as a full citation, and I add some more. Then, typing and voice dictating, I fill the document with text and pictures and video and sound and animation and interactions. I set the default view of the document and save it as such, along with the document, then I send it off to you for comments. You underline, highlight, add and delete, and after a bit of back and forth, and some working on the document together at the same time, we publish the document to the cloud, where it can be downloaded and read by people and AI alike, and referenced by object, not only by web location. In other words, we ‘publish it’ to the equivalent of Doug Engelbart’s ‘Journal’.
Someone reads the document but questions our citations so, as in an animated patch of garden, they gesture for the citations to float above the document; keywords are extracted and the citations are clustered based on their use in our field. The shape of the citations appears in a familiar pattern to them, but one source document does not look right – it is not cited enough by those they know – so the reader opens it, checks the other work of the author, and decides to read it through, resulting in a mental breakthrough outside the ordinary way of thinking we usually employ. We did not cite only to have our citation ignored: the system made it worthwhile for the reader to check it out, and quick and simple enough that the reader bothered.
Our paper is hammered to a wall with other papers on the same topic. The reader chooses to see and set connections at will, seeing a bigger picture and bringing in connections based on dimensions we’d never think of, saves a view of this, then folds the whole wall into a semi-frozen image in a paper written as a riposte to our own.
We are automatically informed of this new reference to our paper; we read it, and when the argument is presented in this way we understand the author’s point and update our paper to include this change. All the instances of our paper become aware of this update. Automatically.
All this is just a fun cartoon outline of rich interactions leading to rich insights.
In order to build such interaction spaces we must build the infrastructures to support them. We must make documents aware of their provenance, connections, interaction potential and structure. We must further work to ensure that the ‘operating systems’ allow for such inter-document and inter-application interactions – no single company will be able to build the perfect solution; only an ecosystem allowing for free data movement can enable evolution directed at augmenting our mental freedom.
In this world we could plug an Xbox or PlayStation into our work environment and use the open standards of access and presentation to fly through our work, then review it later on our smart watch, share it on printed or digital paper, and later work on it further via voice AI.
In this world we would have inter-actions enabled for the user in the same way we have the inter-net enabled for data.