Digital documents are not interactive enough…

Digital documents are not interactive enough to truly augment our ability to tackle the serious problems we are facing, from climate change to social justice, from warfare to healthcare and beyond.
The result is that we are working with what is in effect ‘dead’ information.
A modern academic PDF does not know its publication date or who authored it. It is theoretically possible for this information to be embedded in the PDF, but this simply does not happen in real-world usage. The result is that connections between documents, which are crucial stores of knowledge, exist either in the mind of the reader who looks up cited work or through brittle weblinks which break over time.
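To see how little of this actually travels with the file, here is a minimal sketch of reading a PDF’s built-in metadata fields. The library used (pypdf) and the file name are assumptions for illustration; the point is that in real-world academic PDFs these fields very often come back empty.

    # A minimal sketch of inspecting a PDF's built-in Info dictionary,
    # using the pypdf library; "paper.pdf" is a hypothetical file name.
    from pypdf import PdfReader

    reader = PdfReader("paper.pdf")
    info = reader.metadata  # the Info dictionary, or None if absent

    # In real-world academic PDFs these are very often None or empty.
    print("Title:  ", info.title if info else None)
    print("Author: ", info.author if info else None)
    print("Created:", info.creation_date if info else None)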
Furthermore, a modern academic PDF does not contain information about the document’s structure, such as headings, associated glossary terms, references used and so on, hampering interaction while reading.
A key part of the problem is the way the workflow of document production and consumption is segmented, with different software developers having different areas of focus. There is no accessible way for the metadata which is in the manuscript document to travel through to the end user, whether that user is a human reader or a system which analyses the document.
The consequence of this is that we are wasting a historic opportunity to create richly interactive information environments that truly augment how we think and communicate. And this at an age when we truly need such augmentation.
If we do not embed rich metadata in our documents in a robust way, our documents will remain dead, web connections will continue to be brittle and much of our discourse will remain on private platforms optimised for revenue, not knowledge generation.
I used the analogy that these documents are dead, so let’s look at life for inspiration. A cell, whether animal or human, operates at a scale where water molecules constantly jostle its environment. A cell is not blocked off from the world; it is what we could call ‘windowed’ (Godfrey-Smith, 2020) onto the rest of the world: it is embodied in an energetic flow and it cannot stay alive if it is shut off from the environment.
Evolution acts on this energetic flow precisely because the cells are windowed and not shut off.
Evolution cannot interact much with frozen, dead PDF documents.
This is because software systems cannot readily access their contents, structure, provenance or connections, so new software systems cannot evolve around them. If we manage to give our academic documents such windows for systems to interact with them, then we open up this knowledge and its containers to the power of evolution to come up with ever more powerful means through which we can better think and communicate.
And if we can make this robust, then we have deep opportunities for improvement.
And if we don’t, we are seriously limiting the opportunities for how we interact with our knowledge. The problems of fake news and other media manipulation, as well as the problem of more research being released than any human can read in the traditional way, simply cannot be dealt with effectively through outmoded tools and dead documents.
I don’t think we can risk this. The investment in adding this vital metadata to documents, at the same level as the body text, is minimal, and once it is part of the system the cost is near nil, yet it will enable a whole host of new ways of interacting with our knowledge.
Inaction on improving how we think and communicate is more expensive than action.

This is why we have developed what we call Visual-Meta.

The concept is simple: write at the back of the document, in an appendix, who wrote the document and when, and what its title is, so the document can be cited, but also how the document is structured and how it is connected to other documents through its citations. The concept is not only simple, it is also cheap for the user to implement, since this information is already in the manuscript document; it currently just gets lost on export and publishing.
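To make this concrete, here is an illustrative sketch of such an appendix and of how a program might read it back. The wrapper tags, field names and entry below are assumptions for illustration only; the authoritative format is specified on the visual-meta.info website.

    import re

    # A hypothetical Visual-Meta appendix: ordinary, visible text at the
    # back of the document, wrapped in markers so software can find it.
    APPENDIX = """
    @{visual-meta-start}
    @article{doe2021,
        author = {Doe, Jane},
        title = {Digital Documents Are Not Interactive Enough},
        year = {2021},
    }
    @{visual-meta-end}
    """

    def extract_visual_meta(text):
        """Return the block between the start and end markers, if present."""
        match = re.search(r"@\{visual-meta-start\}(.*?)@\{visual-meta-end\}",
                          text, re.DOTALL)
        return match.group(1).strip() if match else None

    def parse_fields(block):
        """Pull key = {value} pairs out of the BibTeX-style block."""
        return dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", block))

    fields = parse_fields(extract_visual_meta(APPENDIX))
    print(fields["author"], fields["year"])  # prints: Doe, Jane 2021

Because the appendix is ordinary visible text on the page, it survives copy and paste, printing and even re-scanning with OCR, which is what makes the approach robust.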
Visual-Meta is open, free, human and machine readable, and extremely robust, and it is in a pilot with the ACM as well as with commercial companies. More information is on the http://visual-meta.info website, including a brief intro video and all the information necessary to implement it.