
Making Information Self-Aware

We can fight fake news and find more useful information in the tsunami of academic and scientific publishing if we make the information self-aware: if the information knows what it is. This is not a suggestion of Harry Potter-level magical fantasy but a concrete step we can take today to lay the foundation for massive future improvement.

the intelligent environment

Many years ago I read an interview with one of the developers of the computer game Crysis, in which he was lauded for the quality of the AI of the opponents in the game. He said that making the AI was not really the hard part; making the different parts of the environment aware of their attributes was key. If a tree trunk is thick, then the enemy can hide behind it. If it is dense, then it will also serve as a shield, up to a point.

the self-aware document

This is what we can and must do for documents. We must encode the meaning in documents as clearly as possible so that the document may be read by both software and humans. The document must be aware of who authored it, when, what its title is and so on, at the very least to provide the minimal context for useful citations.

It should also know what citations it contains, what any charts and graphs mean, what glossary terms are used and how they connect. Of course, we call this ‘metadata’, information about information, and the term has been used in many ways for many years now, but the metadata has so far been hidden inside the document, away from direct human and system interaction. Perhaps we should instead call it ‘hiddendata’. For some media this is actively used, such as the EXIF data in photographs, but it is lost when the photograph changes format, is inserted into other media or is printed. For text-based documents this is technically possible today but seldom actually used; it is not usefully read by reader software and it is lost on printing.

bibtex foundation

You may well feel that this is simply a call for yet another document format, but it is not. It is a call for a new way to add the academic industry-standard BibTeX style of metadata to any document, starting with PDFs, in a robust, useful and legacy-friendly way: by adding a final appendix to the document which follows a visually human-readable (hence BibTeX) and therefore also machine-parseable format.
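As an illustration only (the entry type and field names below sketch the idea and are not the formal Visual-Meta specification), such an appendix might contain something like:

@article{augmentinghu_douglas_engelbart_19621021231532_6396,
  author = {Douglas Engelbart},
  title  = {Augmenting Human Intellect: A Conceptual Framework},
  year   = {1962},
}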

Since this appendix includes who authored the information, the reading software can ‘understand’ it and make it possible for the user to simply copy text from the document and paste it into a new document as a full citation, in one operation, making citations easier, quicker and more robust. Further information can be described for reader-software parsing, such as how the headings are formatted (so that the reader software can re-format the document if required, for example to show citations in the academic style the reader prefers if it differs from the author’s), what citations are used, what glossary terms are used, what the data in tables contains, and more.
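A minimal sketch, in Python, of how reader software might use such an appendix to turn a plain copy into a full citation. The field names follow the illustrative appendix above; this is not the actual Liquid | Reader implementation.

import re

# The illustrative BibTeX-style appendix as it might appear on the last page of a PDF.
APPENDIX = """
@article{augmentinghu_douglas_engelbart_19621021231532_6396,
  author = {Douglas Engelbart},
  title  = {Augmenting Human Intellect: A Conceptual Framework},
  year   = {1962},
}
"""

def parse_fields(appendix):
    """Pull the simple key = {value} pairs out of a BibTeX-style entry."""
    return dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", appendix))

def copy_as_citation(selected_text, appendix):
    """Wrap the copied text in a citation built from the document's own metadata."""
    meta = parse_fields(appendix)
    return (f'"{selected_text}" ({meta.get("author", "Unknown author")}, '
            f'{meta.get("year", "n.d.")}, {meta.get("title", "Untitled")})')

print(copy_as_citation("Some text copied from the PDF", APPENDIX))

Pasting the output into a new document gives the reader the quotation and its source in a single operation.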

more connected texts

This is making the document say what it is, where it comes from, how it is connected, what it means and what data it contains. This is, in effect, making the document self-aware and able to communicate with the world. These are truly augmented documents.

This will power simple parsing today and, by making documents machine-readable, enable more powerful AI in the future to far better ‘understand’ the ‘intention’ of the author who produced the document.

This explicitly applies to documents and has the added benefit that even if they are converted into different formats, and even if they are printed and scanned, they will still retain their metadata. The concept is extensible to other textual media, but that is beyond this proposal.

visual-meta

I call this approach Visual-Meta and it is presented in more detail at liquid.info/visual-meta.html. I believe this is important, and I have therefore started the process of hosting a dialogue with industry and have produced two proof-of-concept applications, one for authoring Visual-Meta documents and one for reading and parsing them: Liquid | Author and Liquid | Reader: www.liquid.info

paper

Digital capabilities run deeper than what previous substrates could offer, but even in the pursuit of more liquid information environments we should not ignore the power of the visual symbolic layer. We hide the meta at our peril. If we reveal it and include it in the visual document, we gain robustness through document format changes and even printing and scanning, gaining archival strength without any loss of deep digital interactivity. This matters more and more as we discover how brittle our digital data is and how important rich interactivity is for the deeper literacy required to fight propaganda and to propagate academic discoveries often lost in the sheer volume of documents.

Furthermore, with the goal of more robust formats and of supporting the reading of printed books and documents, addressing information (as discussed in the Visual-Meta addressability post) can be printed in the footer of each page, allowing hand-annotated texts to be easily scanned, OCR’d and entered into the user’s digital workflow automatically. Digital is magic. Paper is also magic. One day they will merge, but until then there is value in using both to their strengths.

 

As we make our information aware,
we increase the potential of our own awareness

 

 


Liquid | Reader is finally live

Nice.

What this means initially is that when someone copies text from a PDF and pastes it into Liquid | Author, the text will automatically become a citation.

What it means for the future, if this approach is taken up, is that context, which is crucial for any information, will no longer be buried inside the document with all the fragility that entails, but surfaced: for the human reader to interact with ‘manually’, for software to parse and for servers to analyse.

It means that archiving becomes one step less frail, with all the contextual metainformation kept on the same level as the actual body of the document. A Visual-Meta document can change formats, be printed, scanned and OCR’d and it makes no difference for future metainformation extraction.

I am not one for blowing my own trumpet; I have barely been able to ‘sell’ Liquid | Flow (which gives the user a myriad of instantly responsive, useful commands) and Liquid | Author (which provides a unique dynamic view of the user’s text), but I feel strongly that the Visual-Meta approach is important and needs to be taken forward by the community.


Addressability (supplemental augmentation for Visual-Meta)

Note that the ‘document_name’ is distinct from the title and can be set automatically by the authoring software to help identify the document through search later. The unique name is built from the first 10 characters of the title, the author’s name, the time in condensed form and a random 4-digit number (a sketch of this naming scheme follows the example below). For example:

augmentinghu_douglas_engelbart_19621021231532_6396.pdf

  • 1962 | 10 | 21 | 23 | 15 | 32
  • year | month | day | hour | min | seconds
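A minimal sketch, in Python, of how authoring software might generate such a name. The function name is only illustrative, and the truncation here keeps twelve characters of the title to reproduce the example above:

import random
import re
from datetime import datetime

def make_document_name(title, author, when=None):
    """Build a document_name from a truncated title, the author's name,
    a condensed timestamp (YYYYMMDDhhmmss) and a random 4-digit number."""
    when = when or datetime.now()
    title_slug = re.sub(r"[^a-z0-9]", "", title.lower())[:12]
    author_slug = re.sub(r"[^a-z0-9]+", "_", author.lower()).strip("_")
    stamp = when.strftime("%Y%m%d%H%M%S")
    return f"{title_slug}_{author_slug}_{stamp}_{random.randint(1000, 9999)}.pdf"

# Reproduces the example above (only the random suffix will differ):
print(make_document_name("Augmenting Human Intellect",
                         "Douglas Engelbart",
                         datetime(1962, 10, 21, 23, 15, 32)))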

Document Based Addressability

This approach allows the user to click on a citation and have the PDF itself open if it is available to the user, rather than simply loading a download page. If the document is not found, an opportunity to search for it will be presented.

High Resolution Addressing

Enacting a link in this style is an active process initiated by the Reader software, so adding an internal ‘search’ to the process will allow the software not only to load the document but to open it at the cited section.
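A minimal sketch, in Python, of how reader software might act on such a citation. The library folder and helper names are hypothetical, and a real reader would open the PDF and run its internal search rather than print:

from pathlib import Path

LIBRARY = Path.home() / "Documents"   # hypothetical local library folder

def resolve_document(document_name):
    """Look for the cited PDF locally; return None so the caller can offer a search."""
    matches = list(LIBRARY.rglob(document_name))
    return matches[0] if matches else None

def follow_citation(document_name, cited_text):
    """Open the cited document if it is available, at the cited section if possible."""
    path = resolve_document(document_name)
    if path is None:
        print(f"Not found locally; offer a search for {document_name!r} instead.")
        return
    # A real reader would open the PDF here and search for the cited passage;
    # this sketch only reports what it would do.
    print(f"Open {path} and scroll to the first occurrence of {cited_text!r}.")

follow_citation("augmentinghu_douglas_engelbart_19621021231532_6396.pdf",
                "the section being cited")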
