
We can do anything


I have been lucky enough to have a few conversations with the magical Alan Kay this year. As we wrapped up our last lunch, he said to me, plainly and clearly: “But you can do anything!” At first this bothered me, since it seemed obviously untrue (of course I was wrong) and not feasible (again, I was wrong).

On this morning of the last day of 2018, the last day of the 50th anniversary of Doug Engelbart’s demo, I get some time to put down my thoughts through my laptop and onto your screen, while my beautiful baby boy (Edgar, 19 months old, who has just started skiing) is being read to by his mum in the early, dark hours of our Norwegian mountain-top cabin.

I have thought quite a bit about what Alan said, and it has become clear (though I may still be wrong) that the point is not to come up with a neato-design and build it, but to come up with a design direction with clear goals and frames:


Let’s start with the frames. I feel it is out of scope to look at radical new hardware at this point, since we already have high-resolution, fast, networked devices, from large, stationary setups to mobile laptops and tablets, phones and even watches, all of which can serve as means of access to a system.

What I do not feel we need to worry about is legacy file formats or legacy ways of working: file formats can always be translated at the point where they meet the wider world, and legacy ways of working will quickly and easily be seen as such when they come up against something radically better, which is of course our goal.

The only real frames we have to develop within are the human nervous system and that which can be computed and displayed by a computer system.


The goal must necessarily be quite loose in order to support a long evolution, so let’s start with Doug Engelbart’s stated goal in his 1962 paper and add to it based on his further work. I paraphrase:

We must augment our capabilities to
approach complex problem situations,
to gain comprehension and to
derive solutions
And we must keep improving our capacities to do so.

The way it is stated, this could apply to numerous fields and aspects of human life, and it is clear that is what he intended. However, in this context I will apply it to the ‘fire’ of giving us the ability to flexibly interact with symbols. I immediately go to the notion of symbols since I do not think that the most effective route to dealing with the issues above is to present a ‘natural’ scene to the user. From the very earliest moments of human recording onto a substrate, we have looked for ways to record that which has the most symbolic meaning, not simply slavishly copying down what is visually there in the world.


Symbols are the ‘stuff’ of recorded thought and deserve our utmost attention. We can further focus our design goal:

How can we design and co-evolve systems to
augment the reader’s ability to grasp the author’s intention,
in a critical and contextual way,
while also citing the original work usefully,
and augment the user’s ability to produce further clarity in their own work?

Many will criticise my rabbit hole and might say I have simply provided directions to my own work. I will accept such criticism with the comment that there are of course other avenues to pursue, such as persuasive cinema, educational games and open, free-flowing graphs of variously encoded knowledge. But the human ability to express oneself, to record and justify one’s position, and to make this interpretable and understandable by oneself in the future and by others, will surely remain one core aspect of our intellectual and moral work.

Directions forward

To simply reproduce the marks on a passive substrate in the digital environment is not enough. We must experiment with various ways of increasing the interactivity of the symbols; as I have written, interactivity is the core aspect of symbols and of the universe.

As we experiment with free-form non-linear spaces connected to linear reasoning (LiquidView), with colour coding based on velocity, and with the means through which we can integrate the resulting knowledge product into existing knowledge flows, we also need to look at the spaces of symbols themselves and at how the symbols can most powerfully relate to each other, in a document on a screen and also between spaces: how to connect and link them, how to experience patterns and movements.

We must think freely, with open minds and share our work.

And we must think with the user, not just for the user. We must listen to the user but also encourage a deeper literacy on their part. If we, the tool people, can demonstrate more powerful ways of working, they will come. History is witness to this, but only when the more powerful ways of working are persuasively presented to a receptive audience, and as an audience we need to continually foster a hunger within.

We must think fresh thoughts.

We must also read more of what the founders of the field wrote. We are living in their shadows as well as in their light, and we need to better understand their perspectives, un-fogged by the prejudices of daily use. This is something I have promised Alan to continue doing.

Personally, I will continue to experiment with Liquid | Author and LiquidView, to post my findings and thoughts here on the blog, and to host the Future of Text Symposium. How do you, dear reader, wish to produce and connect? Let’s make 2019 a banner year for breakthroughs in employing powerful computer systems in the aid of our thinking, not only our distraction.

My dream is to look back within a few years and smile at how naive this all was, and at how obvious it was that the best solution lay somewhere completely different from where we looked; but it was only by going into the first of the possibilities that we found this out.

Happy New Year!

P.S., please have a look at my recent, related blog post
Two New Year Wishes.
