Last updated on January 15, 2018
The project is to build and research a Dynamic View.
Whether accessed through a conversation between two or more people, or via pencil and paper, spoken AI, AR, VR or a futuristic Direct Brain Connection, the information we need to interact with in order to solve problems must have some sort of symbolic representation for us to be able to access, view, interact with and share it. Without a symbolic representation we have no way to grasp the information – no way to get a handle on it – and no way to develop our perspectives.
The development of such symbolic representations must then surely be a core aspect of knowledge work, and a core target for augmentation.
Of the senses available to us, vision affords by far the greatest bandwidth between external symbolic representations and the prefrontal cortex, where we do our higher-level thinking:
We perceive about 12 million bits/second
(10 million from vision, 1 million from touch, and the rest scattered among the other senses).
16 bits per second is the bandwidth of consciousness.
Symbols are the Trojan horses by which we smuggle bits into our consciousness.
This is why I am working on researching Dynamic Views for my PhD, with the goal of having something solid to demo on the 50th anniversary of Doug Engelbart’s Great Demo.
Text interaction is a natural candidate for augmentation because of its long and rich history. Text has been put to use in various aspects of thought augmentation for millennia – the three-and-a-half-thousand-year investment our species has made in the written word has produced a powerful recording and thinking system. There has been extensive research on reading and writing, concept mapping and outlining, hypertext (including spatial hypertext), and layouts and typography, but this research has treated the different issues in isolation, because before interactive computers enabled digital symbol manipulation, these aspects were indeed isolated.
The research proposal does not aim to build a free-standing test system but one embedded in real-world workflows; as such, it is integrated into a word processing workflow:
The Project: Dynamic View
The research project centres on the notion of a ‘Dynamic View’: an interactive, concept-map-like view of a word processing document. The user can toggle into and out of the Dynamic View instantly from the word processing view by collapsing the document with a pinch gesture on a trackpad, leaving only the headings of the document visible and creating a free-form thinking space with the headings as nodes. From this grow specific research questions, including:
• What can and should visual lines represent?
• How can and should nodes be represented?
• How can text not present on the screen be interacted with, including keywords, citations and links?
• What will be the most useful options for generating visual connections and layouts?
• How can such rich documents, with so much inherent meta-data, work within the Socratic Publishing mode to provide the basis for a DKR (Dynamic Knowledge Repository)?
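As a rough illustration of the collapse step only – not Author’s actual implementation, whose document model, gesture handling and layout are of course far richer – a minimal sketch might model headings becoming draggable nodes like this (the tuple-based document model and the initial stacked layout are assumptions for the sake of the example):

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A heading rendered as a movable node in the Dynamic View."""
    title: str
    level: int      # heading depth (1 = top-level)
    body: str       # the collapsed text under this heading
    x: float = 0.0  # free-form position in the thinking space
    y: float = 0.0

def collapse_to_nodes(document):
    """Collapse a document to its headings, one node per heading.

    `document` is a list of (level, heading, body) tuples – a stand-in
    for the word processor's real document model.
    """
    nodes = []
    for i, (level, heading, body) in enumerate(document):
        # Start with a simple indented stack; the user then drags
        # nodes freely around the thinking space.
        nodes.append(Node(heading, level, body, x=40.0 * level, y=60.0 * i))
    return nodes

doc = [
    (1, "Introduction", "Text is an old technology..."),
    (2, "Prior Work", "Concept mapping, outlining..."),
    (1, "The Dynamic View", "Pinch to collapse..."),
]
for n in collapse_to_nodes(doc):
    print(n.level, n.title, (n.x, n.y))
```

Toggling back to the word processing view would then simply restore each node’s body text beneath its heading, so the two views remain two presentations of one document.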
A description and visual mock-up introduction to the Dynamic View is available here: liquid.info/view.html
To host the Dynamic View functionality I have developed (designed and project managed, not coded – I am not a programmer) a word processor for macOS called Author, available on the macOS App Store and named in reference and homage to Doug’s Augment system: liquid.info/author.html
Aim & Timeline
The research will first focus on building a flexible interaction space in Author where experiments in interactions and visual styles can be carried out, with the aim of producing a shipping version 1.0 to demonstrate on the 50th anniversary of Doug Engelbart’s great demo, if it is good enough at that point, alongside a community of other projects. I am passionate about having something to show on that day which will truly honour him.
The community I am working with to realise this vision includes Ted Nelson (who coined the term hypertext), Ward Cunningham (inventor of the wiki), Bruce Horn (who coded the first Macintosh Finder), and Adam Cheyer (of Siri fame). They have invented many pieces of this great puzzle, for which I am grateful, and I will continue to have regular, recorded dialogue with them as part of my annual Future of Text Symposium http://futureoftext.org, which I co-host with Vint Cerf, co-inventor of the Internet, who has also been a substantial backer of the Author project.
Implementing the initial interactions I have been able to dream up (‘being a dreamer is hard work!’ Doug once said to me) will simply be a matter of project managing a software project; the basic pieces are expected to be in operation by Easter. Once this is done, more advanced notions can be tested – notions which rely heavily on immediate interaction rather than pre-thought scenarios, where the smoothness of the flow becomes paramount – and we will then, hopefully, research and invent a new medium. In the manner that one still image after another, shown fast enough, at least 24 images a second, transforms from static pictures into the new medium of the moving image: what can advanced non-linear text interactions become?