

Addressability (supplemental augmentation for Visual-Meta)

Note that the ‘document_name’ is distinct from the title and can be set automatically by the authoring software to help identify the document through search later. The unique name is built from the first characters of the title, the author’s name, the time in condensed form and a random four-digit number. For example:

augmentinghu_douglas_engelbart_19621021231532_6396.pdf

  • 1962 | 10 | 21 | 23 | 15 | 32
  • year | month | day | hour | minute | second
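As a rough illustration, the following Python sketch builds such a name (the field order and condensation rules are inferred from the example above, which keeps twelve title characters; all helper names are illustrative):

import random
import re
from datetime import datetime

def make_document_name(title, author, when):
    # First characters of the title, lower-cased, punctuation and spaces dropped
    # (the example name keeps twelve: 'augmentinghu').
    stem = re.sub(r"[^a-z0-9]", "", title.lower())[:12]
    # Author's name, lower-cased, spaces replaced with underscores.
    who = re.sub(r"\s+", "_", author.strip().lower())
    # The time in condensed form: year month day hour minute second.
    stamp = when.strftime("%Y%m%d%H%M%S")
    # A random 4-digit number makes collisions unlikely.
    return "{}_{}_{}_{}.pdf".format(stem, who, stamp, random.randint(1000, 9999))

print(make_document_name("AUGMENTING HUMAN INTELLECT",
                         "Douglas Engelbart",
                         datetime(1962, 10, 21, 23, 15, 32)))
# -> e.g. augmentinghu_douglas_engelbart_19621021231532_6396.pdf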

Document-Based Addressability

This approach allows the user to click on a citation and have the PDF open, if it is available to the user, rather than simply loading a download page. If the document is not found, the user is offered the opportunity to search for it.

High-Resolution Addressing

Enacting a link in this style is an active process initiated by the Reader software, so adding an internal ‘search’ step to the process allows the software not only to load the document but to open it at the section cited.
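As a rough sketch of how a reader might implement both behaviours (the library location, helper names and return values here are illustrative, not part of any specification):

from pathlib import Path

def resolve_citation(document_name, cited_text, library=Path.home() / "Documents"):
    # Look for the uniquely named PDF anywhere in the user's library.
    matches = list(library.rglob(document_name))
    if not matches:
        # Not available locally: offer a search for the name rather than
        # sending the user to a download page.
        return ("search", document_name)
    # Available: open the document and run an internal search for the
    # cited text, landing the reader at the cited section.
    return ("open", matches[0], cited_text)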


Making Information Self-Aware

We can fight fake news and find more useful information in the academic and scientific publishing tsunami if we make the information self-aware: if the information knows what it is. This is not a suggestion of Harry Potter-level magical fantasy but a concrete act we can start on today, laying the foundation for massive future improvement.

the intelligent environment

Many years ago I read an interview with one of the developers of the computer game Crysis, who was praised for the quality of the AI of the opponents in the game. He said that making the AI was not really the hard part; making the different parts of the environment aware of their attributes was key. If a tree trunk is thick, the enemy can hide behind it. If it is dense, it will also serve as a shield, up to a point.

the self-aware document

This is what we can and must do for documents. We must encode the meaning in documents as clearly as possible so that the document may be read by both software and humans. The document must be aware of who authored it, when, what its title is and so on, to provide at least the minimal context for useful citations.

It should also know what citations it contains, what any charts and graphs mean, what glossary terms are used and how they connect. Of course, we call this ‘metadata’ (information about information) and the term has been used in many ways for many years now, but the metadata has so far been hidden inside the document, away from direct human and system interaction; we should perhaps call it ‘hiddendata’ instead. For some media this is actively used, such as the EXIF data in photographs, but it is lost when the photograph changes format, is inserted into other media or is printed. For text-based documents this is certainly possible today, but it is seldom actually used, rarely usefully read by the reader software, and lost on printing.

bibtex foundation

You may well feel that this is simply a call for yet another document format, but it is not. It is a call for a new way to add academic, industry-standard BibTeX-style metadata to any document, starting with PDFs, in a robust, useful and legacy-friendly way: by simply adding a final appendix to the document which follows a visually human-readable (hence BibTeX) and therefore also machine-parseable format.

This will include who authored the information, which the reading software can ‘understand’, making it possible for the user to simply copy text from the document and paste it as a full citation into a new document in one operation, making citations easier, quicker and more robust. Further information can be included for reader-software parsing, such as how the headings are formatted (so that the reader software can re-format the document if required, for example to show citations in the style the reader prefers if it differs from the preference of the author), what citations are used, what glossary terms are used, and what the data in tables and similar structures contains.
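A minimal sketch of that copy-as-citation step, assuming the document’s citation entry has already been extracted from its appendix (the field names follow the mockup later on this page; the citation style and function names are illustrative):

import re

def bibtex_fields(entry):
    # Pull the flat 'key = {value}' pairs out of a (collapsed) BibTeX entry.
    return dict(re.findall(r"([\w][\w ]*?)\s*=\s*\{([^}]*)\}", entry))

def paste_as_citation(copied_text, entry):
    f = bibtex_fields(entry)
    return '"{}" ({}: {}, {})'.format(copied_text,
        f.get("author", "?"), f.get("title", "?"), f.get("year", "?"))

entry = ("@article{ author = {Douglas Carl Engelbart}, "
         "title = {Augmenting Human Intellect}, year = {1962}, }")
print(paste_as_citation("the copied passage", entry))
# -> "the copied passage" (Douglas Carl Engelbart: Augmenting Human Intellect, 1962)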

more connected texts

This is making the document say what it is, where it comes from, how it’s connected, what it means, and what data it contains. This is, in effect, making the document self aware and able to communicate with the world. These are truly augmented documents.

This will power simple parsing today and enable more powerful AI in the future to much better ‘understand’ the ‘intention’ of the author producing the document, by making documents machine-readable.

This explicitly applies to documents and has the added benefit that even if they are converted into different formats, and even if they are printed and scanned, they will still retain the metadata. The concept is extensible to other textual media, but that is beyond this proposal.

visual-meta

I call this approach Visual-Meta and it is presented in more detail at liquid.info/visual-meta.html. I believe this is important, so I have started the process of hosting a dialogue with industry and have produced two proof-of-concept applications, one for authoring Visual-Meta documents and one for reading and parsing them: Liquid | Author and Liquid | Reader: www.liquid.info

paper

Digital capabilities run deeper than those of any previous substrate, but even in the pursuit of more liquid information environments we should not ignore the power of the visual symbolic layer. We hide the meta at our peril. If we reveal it and include it in the visual document, we gain robustness through document format changes and even printing and scanning, gaining archival strength without any loss of deep digital interactivity. This matters more and more as we discover how brittle our digital data is, and how important rich interactivity is to enable the deeper literacy required to fight propaganda and to propagate academic discoveries often lost in the sheer volume of documents.

Furthermore, with the goal of more robust formats and of supporting the reading of printed books and documents, addressing information (as discussed in the Visual-Meta addressability post) can be printed in the footer of each page, allowing hand-annotated texts to be scanned, OCR’d and entered into the user’s digital workflow automatically. Digital is magic. Paper is also magic. One day they will merge, but until then there is value in using both to their strengths.


As we make our information aware,
we increase the potential of our own awareness



Visible-Meta Example & Structure

The Visible-Meta appendices are automatically inserted at the end of the PDF on export, after the References and any other user-added appendices, under a normally formatted heading ‘Visible-Meta’. After the heading, the text @{visual-meta-start} is inserted to tell the reader software to parse what follows, with @{visual-meta-end} at the end. (For performance, the reader software actually parses from the end of the document, since there is less text to scan before the visual-meta is found.)
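A sketch of that end-first scan, assuming the PDF’s text has already been extracted into a string (the function name is illustrative):

def extract_visual_meta(full_text):
    # Scan backwards from the end of the document, where the appendix lives.
    start_tag, end_tag = "@{visual-meta-start}", "@{visual-meta-end}"
    end = full_text.rfind(end_tag)
    if end == -1:
        return None                  # no visual-meta in this document
    start = full_text.rfind(start_tag, 0, end)
    if start == -1:
        return None                  # malformed: end tag without a start tag
    return full_text[start + len(start_tag):end].strip()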

The format is in the style of the BibTeX export format, though collapsed to save space. It was chosen because of the ubiquity of the format and its human- and machine-readable style. It should be trivial for software to convert normally line-broken BibTeX to and from the visual-meta layout.
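For instance, a round trip between the two layouts can be as simple as the following sketch (real entries whose values contain braces would need a proper parser):

def collapse(bibtex):
    # Join normally line-broken BibTeX onto a single line to save space.
    return " ".join(line.strip() for line in bibtex.splitlines() if line.strip())

def expand(collapsed):
    # Re-break a collapsed entry at its field boundaries for readability.
    return collapsed.replace("}, ", "},\n  ")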

The font size is suggested to be 9 point to save space, but this is not ‘read’ by the reading software, so it is left to the specific implementation.

Sections

  • version, generator & source
  • citation meta – bibtex
  • document name
  • formatting
  • citations
  • glossary
  • special
  • generator

The explanatory text shown below each heading is to be included in the actual Visible-Meta to aid in its readability and adaptability.

Mockup

In this mockup all the descriptive text should be in the actual visible-meta document to make it clear for the reader (human or software) what the sections are. Note that the BibTeX entry for citation information follows the BibTeX layout but is collapsed to save space, in line with all the other sections.

Note that in the first example both ‘generator’ and ‘source’ are listed, but only one would be used: generator if the visible-meta was added on document production, and source if the visible-meta was added after the document was produced, most likely by the user through a web-browser plugin or other means.

–––––––––––––––––––––––––––––––––––

Visible-Meta

@{visual-meta-start}

Version, Generator & Source. An explanation of what this section is, and code for reading it, is available (at the time of writing) from liquid.info/visual-meta.html. The generator is the software used to produce the document, and the source is the software or system which provided the citation information if the visual-meta was not added by the generating software (this also shows the date):

@visible-meta{ version = {1.1}, generator = {Liquid | Author 4.6}, source = {Scholarcy, 2019,08,01}, }

Citation Meta – BibTeX. Describes who the author of this document is, in order for the reader to cite this document. The order of information follows the BibTeX format:

@article{ author = {Douglas Carl Engelbart}, title = {AUGMENTING HUMAN INTELLECT – A Conceptual Framework}, month = jul, year = {1962}, institution = {SRI}, }

Document Name. The document name can be used to find the document, as described in this post: wordpress.liquid.info/addressability-supplemental-augmentation-for-visual-meta/

augmentinghu_douglas_engelbart_19621021231532_6396.pdf

Formatting. Describes how the text, including headings, is formatted:

@formatting{ heading level 1 = {Helvetica, 22pt, bold}, heading level 2 = {Helvetica, 18pt, bold}, body = {Times, 12pt}, image captions = {Times, 14pt, italic, align centre}, }

Citations. Describes the formatting of the inline citation style and of the References section, in order for the reader to parse how citing is done in this document:

@citations{ inline = {superscript number}, section name = {References}, section format = {author last name, author first name, title, date, place, publisher}, }

Glossary. Describes glossary terms, with plain-text definitions and relationships to other terms:

@glossary{ term = {Name of glossary term}, definition = {freeform definition text}, relates to = {relationship – “other term”},  term = {Name of glossary term number two}, definition = {freeform definition text}, relates to = {relationship – “other term”}, }

Special: Dynamic View. Any section prefaced with ‘Special’ in the header describes specific, special views the authoring software supported. As with the other formatting, it is optional to implement in a reader:

@Special{ name = {DynamicView}, node = {nodename, location, connections}, }

@{visual-meta-end}
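One parsing detail worth noting: the @glossary section above repeats the same keys for every term, so a naive key-to-value dictionary would keep only the last term. A sketch of grouping the fields into one record per term (helper names are illustrative):

import re

def parse_glossary(block):
    # Extract the flat 'key = {value}' pairs, in order of appearance.
    fields = re.findall(r"([\w][\w ]*?)\s*=\s*\{([^}]*)\}", block)
    terms, current = [], None
    for key, value in fields:
        key = key.strip()
        if key == "term":            # each 'term' starts a new record
            current = {"term": value}
            terms.append(current)
        elif current is not None:    # attach definition, relates to, etc.
            current[key] = value
    return terms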

