

Making Information Self Aware

We can fight fake news and find more useful information in the academic and scientific publishing tsunami if we make the information self aware – if the information knows what it is. This is not a suggestion of Harry Potter-level magical fantasy but a concrete act we can start today, laying the foundation for massive future improvement.

the intelligent environment

Many years ago I read an interview with one of the developers of the computer game Crysis, in which he was praised for the quality of the AI of the opponents in the game. He said that making the AI was not really the hard part; making the different parts of the environment aware of their attributes was key. If a tree trunk is thick, then the enemy can hide behind it. If it is dense, then it will also serve as a shield, up to a point.

the self aware document

This is what we can and must do with documents. We must encode the meaning in documents as clearly as possible so that the document may be read by both software and humans. The document must be aware of who authored it, when, what its title is and so on, to provide at least the minimal context for useful citations.

It should also know what citations it contains, what any charts and graphs mean, what glossary terms are used and how they connect. Of course, we call this ‘metadata’ – information about information – and the term has been used in many ways for many years now, but so far the metadata has been hidden inside the document, away from direct human and system interaction. We should perhaps instead call it ‘hiddendata’. For some media it is actively used, such as the EXIF data in photographs, but it is lost when the photograph changes format, is inserted into other media, or is printed. For text-based documents this is certainly possible today, but it is seldom actually used, not usefully read by the reader software, and lost on printing.

bibtex foundation

You may well feel that this is simply a call for yet another document format, but it is not. It is a call for a new way to add academic, ‘industry-standard’ BibTeX-style metadata to any document, starting with PDFs, in a robust, useful and legacy-friendly way: simply add a final appendix to the document which follows a visually human-readable (hence BibTeX) and therefore also machine-parseable format.
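As a sketch of what such an appendix might look like, and of how little machinery is needed to parse it, here is a hypothetical BibTeX-style block with a minimal Python parser. The field names and values are purely illustrative, not the official Visual-Meta specification:

```python
import re

# Hypothetical Visual-Meta style appendix, as it might appear as
# visible text on the last page of a PDF. Entry type, key, field
# names and values are illustrative only.
APPENDIX = """
@article{example2019,
  author = {A. N. Author},
  title = {Making Information Self Aware},
  year = {2019},
}
"""

def parse_fields(entry):
    """Extract key = {value} pairs from a BibTeX-style entry."""
    return dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", entry))

fields = parse_fields(APPENDIX)
print(fields["author"])  # A. N. Author
print(fields["year"])    # 2019
```

Because the appendix is ordinary visible text, this same parsing works whether the block came from the PDF’s text layer or from OCR of a printed copy.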

Because this appendix records who authored the information, the reading software can ‘understand’ it and make it possible for the user to simply copy text from the document and paste it as a full citation into a new document in one operation, making citations easier, quicker and more robust. Further information can be specified for reader-software parsing, such as how the headings are formatted (so that the reader software can re-format the document if required, for example to show citations in the academic style the reader prefers if it differs from the author’s), what citations are used, what glossary terms are used, what the data in tables and so on contains, and more.
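To illustrate the copy-as-full-citation idea, here is a minimal sketch of turning parsed metadata into a pasteable citation string. The author-year style and field names here are assumptions for illustration, not how Liquid | Reader actually formats its output:

```python
def format_citation(fields):
    """Render parsed metadata as a simple author-year citation.

    The style is illustrative; a real reader application would
    format according to the user's preferred citation style.
    """
    return "{author} ({year}). {title}.".format(
        author=fields.get("author", "Unknown"),
        year=fields.get("year", "n.d."),
        title=fields.get("title", "Untitled"),
    )

# Metadata as it might have been parsed from a document's appendix.
fields = {"author": "A. N. Author",
          "title": "Making Information Self Aware",
          "year": "2019"}

print(format_citation(fields))
# A. N. Author (2019). Making Information Self Aware.
```

The point of the sketch is that once the metadata travels with the text, producing a correct citation becomes a mechanical string operation rather than a manual lookup.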

more connected texts

This is making the document say what it is, where it comes from, how it’s connected, what it means, and what data it contains. This is, in effect, making the document self aware and able to communicate with the world. These are truly augmented documents.

This will power simple parsing today and, by making documents readable, enable more powerful AI in the future to better ‘understand’ the ‘intention’ of the author who produced the document.

This explicitly applies to documents and has the added benefit that even if they are converted into different formats, and even if they are printed and scanned, they will still retain the metadata. The concept is extensible to other textual media, but that is beyond this proposal.


I call this approach Visual-Meta, and it is presented in more detail here. I believe this is important, so I have started the process of hosting a dialogue with industry and have produced two proof-of-concept applications, one for authoring Visual-Meta documents and one for reading and parsing them: Liquid | Author and Liquid | Reader:


Digital capabilities run deeper than what previous substrates could offer, but even in the pursuit of more liquid information environments we should not ignore the power of the visual symbolic layer. We hide the meta at our peril; we reveal it, including it in the visual document, and gain robustness through document format changes and even printing and scanning. This gives archival strength without any loss of deep digital interactivity – something which matters more and more as we discover how brittle our digital data is, and how important rich interactivity is to enable the deeper literacy required to fight propaganda and to propagate academic discoveries often lost in the sheer volume of documents.

Furthermore, with the goal of more robust formats and of supporting the reading of printed books and documents, addressing information (as discussed in the Visual-Meta addressability post) can be printed in the footer of each page, allowing hand-annotated texts to be easily scanned, OCR’d and entered into the user’s digital workflow automatically. Digital is magic. Paper is also magic. One day they will merge, but until then there is value in using both to their strengths.


As we make our information aware,
we increase the potential of our own awareness




Email from User Brad Stephenson

Brad Stephenson is a Liquid | Author and Flow user who got in touch asking where Liquid | Reader is, since he had seen it on the website. I apologised that it had been held back a bit and asked him why he was interested. His reply read like a manifesto for why we are building it, so I asked for his permission to post his email, and here it is:

Your mention of Reader as a helpful Literature Review tool caught my attention. My understanding, in simple terms: Reader would open PDF documents and allow copying of text within the app, then when pasted (assumedly in Author or another word processor) the bibliographic meta data would be embedded and automatically pasted with the text (I wasn’t exactly sure how it would be displayed and read – code or text). Additionally the document could be read and highlighted in its entirety, then Reader would include a feature which allowed only the highlights from the document to be displayed for review and assessment. This of course would make citing from PDFs more streamlined and efficient.

A major frustration for me in completing my dissertation was in relation to citation software. My institution began with RefWorks as recommended by the library research assistant. The next year they dropped the contract with RefWorks and recommended Mendeley, the following year dropping Mendeley and supporting Zotero. What I discovered was that citations downloaded in compatible format from the library websites and opened in Zotero were unreliably formatted. Instead of making the job easier it made it harder. If I were doing the project again I would probably build a bibliographic database using a spreadsheet, and manually insert footnotes. Your description of Reader indicates an ability to have the citation data embedded with the text, which could be easily pasted and referenced for bibliographic or footnote usage.
Brad Stephenson

Update (June 2019)

A friend asked for an update to introduce someone to the book project and the work in general so I thought I might as well post it as an update here in my journal:



With that aim I have developed an interactive text utility for macOS (I’m afraid all my work is in the Apple ecosystem, for my sins) called Liquid | Flow, which allows the user to invoke a myriad of commands within half a second to search for highlighted text, translate it and more. The site for this and my main project, Liquid | Author, is


Liquid | Author is a minimalist word processor with powerful tools for the digital age. It is not hamstrung by attempting to mimic paper but liberated by enabling rich interactions: Collapse the document into an instant outline with a pinch of the trackpad and see all occurrences of any text without having to scroll through the document looking for yellow boxes. Add citations from Books, Web, Video & Academic Documents instantly, and search any online resource in half a second.

It is also the first word processor with an integrated Dynamic View for freeform thinking, brainstorming, concept mapping and mind mapping. You can see the new 2-minute video demo:

Visual-Meta & Reader

Finally, and perhaps most importantly, there is my notion of a visual-meta information system, for which I am building a new PDF reader called Liquid | Reader (I have been told I don’t use my imagination on naming things). It should be in the App Store around next weekend.
The approach originates in the acceptance that PDFs are embedded in the academic (and business) world, and that the act of ‘freezing’ information at the point of publishing is useful and important, but that it should not be a struggle to utilise a document’s meta-information for such basic and core uses as citing the document.
It is based on the premise that documents should be readable, both by humans and by systems, and this is done by adding a visual meta information section at the end of the document. Please have a look at the blog post, which contains a roughly made demo video using Author and Reader to make this happen: