
Update. Visual-Meta

I sent the following to my advisors, just after coming back from a lovely family holiday on the Isle of Wight.

Wendy, Les,

I just finished a Zoom with Wayne Graves from the ACM Digital Library and his colleague Asad Ali. It was a very positive call; they really get the point and benefit of Visual-Meta. We will talk again in a few weeks; they want to start testing and will need time to think about how. They very much like the ’surfacing’ of the metadata and how it can add archival and interactive value to PDF documents.

It felt like a combination viva and marketing pitch, with them marketing the concept to me. It was quite moving. It was a validation beyond what I could have hoped for.

I also had a good chat with Mark Graham, who runs the Wayback Machine at the Internet Archive. He also really gets the point (primarily the archival aspect) and we are looking at how to test there too.

I’ll be meeting The Future Text Initiative group on the 24th (Vint, Ismail, Pip and David De Roure) to discuss how to take Visual-Meta further.

And of course The Future of Text book preview is happily here, something I sent to you earlier, Wendy, but not to you yet, Les, since you are not yet in there :-) https://youtu.be/FERMYTSb5c4 It’s getting plenty of nice comments back.

So here is my thanks: I don’t know where this work will go, whether it will succeed or fizzle out, but I am so very grateful for the opportunity to get to this stage, and that is completely and utterly because of the two of you. Thank you.

Now, back to work on the final Upgrade paper. I’m going into isolation on Friday for my heart operation, which is now scheduled two weeks from then, so I should have a good amount of time to finish the work. Here is the update to the twist of the PhD thesis, which I think makes it much more focused and more testable. Do you agree?

My proposal is *enabled by* an approach I developed for this PhD, called Visual-Meta, which adds an appendix to PDF documents containing metadata (semantic and formatting information), keeping this information on the same level as the body ‘content’ of the document and allowing interactive reader software to extract utility from it.
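As a rough illustration (the wrapper markers, entry type and field values below are placeholders, not a finalised format), such an appendix might look like this:

@{visual-meta-start}
@article{example2020,
  author = {Author, Example},
  title = {Example Document Title},
  year = {2020},
}
@{visual-meta-end}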

(The benefits are more robust citations when the metadata is appended natively on production; a more flexible workflow, since the reader can copy to cite rather than having to go through a reference-manager database or plugin; and affordances for advanced interactions.)

I *propose* to build and test what I call Augmented References, which are enabled by Visual-Meta and extended by Hypertextual Citing. This will allow the reader to 1) view full citation information immediately when coming across a citation in the body of the document, 2) view the citation in-situ when viewing it in the References section, and 3) re-order the References section based on the reader’s criteria, including viewing the citations sequentially as they appear in the document (with headings so that the reader can see at a glance where they are used), chronologically, and so on, and also to specify how the citations should appear, including bolding document titles and more.
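As a small illustrative sketch of the re-ordering in point 3, assuming the reader software has already parsed the Visual-Meta appendix into structured entries (the field names below are placeholders), the re-ordering itself reduces to a simple sort over those entries:

# Illustrative sketch only: assumes the appendix has already been parsed
# into a list of citation dictionaries; field names here are placeholders.
from typing import Dict, List

def reorder_references(entries: List[Dict], criterion: str = "appearance") -> List[Dict]:
    # Re-order parsed citation entries by the reader's chosen criterion.
    if criterion == "appearance":
        # Sequential order: position of each entry's first in-text citation.
        return sorted(entries, key=lambda e: e.get("first_seen_at", 0))
    if criterion == "chronological":
        return sorted(entries, key=lambda e: e.get("year", 0))
    if criterion == "author":
        return sorted(entries, key=lambda e: e.get("author", "").lower())
    return entries

entries = [
    {"author": "Example, A.", "title": "First Cited Work", "year": 2019, "first_seen_at": 3},
    {"author": "Sample, B.", "title": "Second Cited Work", "year": 2015, "first_seen_at": 1},
]
for e in reorder_references(entries, "chronological"):
    print(e["year"], e["author"], e["title"])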

Frode Hegland
futureoftext.org
liquid.info

Published in Thoughts, Updates, Visual-Meta
