
Day: January 14, 2018

9th of December 2018

On the 9th of December 1968 my friend and mentor Doug Engelbart began the demonstration that would go on to change the world, opening with these words:

The research program that I am going to describe to you is quickly characterisable by saying: if in your office, you as an intellectual worker were supplied with a computer display backed up by a computer that was alive for you all day, and was instantly responsive to every action you have, how much value could you derive from that?
Engelbart, 1968

In the 49 years since his demonstration we have seen huge leaps in some aspects of how we interact with information on computer screens but we have largely ‘painted within the frame’ by still having our information manipulation paradigm defined by our legacy of paper documents and not advanced by the interaction opportunity made possible by powerful computers. Doug described this in a chat we had in his office:

…another funny thing about paradigms is the way people use computers is still so much like, hey, they can get it to open a page, I can scroll down through the pages, or now I can jump to another book. But you say look, that’s still hooked with a lot of what the printing press provided you with. Why can’t I have a reference link that points to a particular word or particular paragraph in any given document, or when it gets there actually highlights this particular sentence and shows me the first line of the next five paragraphs just as a view that helps me get the idea of what the… Those are the things we started working with, and moving around, scrolling  – jeez – why, can’t you jump by just saying where you want to go  – but people say that got to complicated the way you had it in your system…
Engelbart, 2013

In the spirit of his philosophy, I propose to research how to build a powerfully interactive, non-linear textual thinking space for general knowledge work, starting with a user group of university students, with full legacy compatibility with current work practices and document formats, to be presented at the 50th Anniversary of Doug Engelbart’s great demo, on the 9th of December 2018.

Much work has been done on interactive text, particularly at SRI and PARC, as well as during the early days at Apple, Microsoft and elsewhere, but these systems generally suffered from a lack of an engaged user base, a lack of integration with contemporary work practices, and the difficulty of competing in an ecology dominated by Microsoft – particularly its Office suite of software – and Apple. What is different now is that the user has evolved, the development environment is much more open, and there are new opportunities for legacy support from a solution we call Socratic Authoring.

During the first part of my PhD research, and while working with Doug Engelbart, I have been lucky enough to become part of a hugely creative and passionate community around interactive text, including – in no particular order – Ted Nelson, Mark Bernstein, Dame Wendy Hall, Les Carr, Frank Shipman, Cathy Marshall, Jeff Conklin, Ward Cunningham, Alan Kay, Bruce Horn, Jack Park and Adam Cheyer, and this is the community in which I will be doing my research. They have invented many parts of the wheel of this great puzzle, for which I am grateful, and I will continue to have regular, recorded dialogue with the community as part of my annual Future of Text Symposium (search: future of text), which I co-host with Vint Cerf, as well as through weekly development meetings (on Wednesdays, in a forum which supports all efforts to produce something to demo on the 9th of December). This research project is not being done in isolation, and not merely standing on the shoulders of giants; it is rather being made shoulder to shoulder with the giants of our field.


Human & AI Speech Interaction, ‘Photo-Realistic’ AR & VR & Direct Brain Connection ‘DBC’


Apart from the most onomatopoeic of sounds, spoken words perform the same function as written text, and whereas speech between humans carries a high bandwidth of emotional intention, its durability is limited to working memory, plus long-term memory for those words which made an immediate enough impact to be stored there. Spoken commands and spoken replies from AI will become an ever more important aspect of how we interact with our information environment (such as via Siri, Cortana and Bixby), but that does not detract from the power of visual symbolic thinking.

‘Photo-Realistic’ AR & VR

Even if the problem at hand does relate to a physical space, the notation will likely not be ‘photo-realistic’; it will be a symbolic ‘map’ to some degree. Photo-realistic imagery, both synthetic and recorded, on screens and in headsets, will continue to play an important role in human cognition and will provide new opportunities for rich interaction, but that does not detract from the power of visual symbolic thinking.

Direct Brain Connection ‘DBC’

Whether the user employs speech interfaces or interacts with the information regarding the problem on a large computer screen, small smartphone screen, in VR or AR, there will need to be some representation of the symbols.

Even a DBC is just an interface; it will need to interface with something. This is why the issues surrounding visual symbolic thinking are deeply human and deeply intertwined with multiple future interface technologies – just because Doug Engelbart’s imagination allowed him to produce useful systems in the 1960s does not mean the scope for innovation is in any way exhausted.



In the context of my work, and interactive text specifically, by ‘symbolic’ I am not trying to be technical. I simply mean that ideas, thoughts and concepts have to ‘live somewhere’ in order to be communicated and interacted with. The question of whether unexpressed thoughts are symbolic is out of scope here, since this project concerns interaction with symbolic representations.
