After a very successful launch of the book and presentation of the augmented text software at https://futuretextpublishing.com/first_edition_launch/ it’s time to focus on the next steps of the PhD and also to look at how I can help the software spread, even go viral to a degree.
The new MacBook Air with the M1 Apple Silicon chip is doing something wonderful to me, rekindling my excitement about the Mac in ways I can’t really remember since Apple’s Industrial Revelation commercial from the 1990s: https://youtu.be/ecP_xoKrQ9Y?t=7. Go watch it, it’s worth it. I’ll wait… Seriously, have a look.
From what I have read about the history of the development of the Apple Newton, this was actually Michael Tchao’s speech to Sculley to sell him on the idea of the Newton. But never mind that (I might be mistaken, I might not be, but it’s a bit of colour). That commercial is electric with hope. And with the ML power of the M1 I am finally ready to embrace AI as a tool to help authors and readers. We’ve got the power. https://youtu.be/j1BNcSBApOU
And I think ML augmented text can make waves and make money. Here is the main thrust of the idea, which came out of discussions with Timour Shoukine in Croatia many lifetimes ago: Make a ‘Pro’ version (or ‘educator’s version’) of Reader which uses the visual-meta information embedded in a student’s Author-created PDF to automatically analyse aspects of the student’s paper, making it faster for the teacher to evaluate.
It could mean automatic ML analysis of basic grammar, reading/authoring level and basic checks for plagiarism. This report could be enough to flag sub-standard submissions, letting the teacher send the document back to the student with a single click, along with the results of the analysis: “This document seems to have a degree of plagiarism you should address, and the reading level is not up to what is expected of you.” The document could also be annotated automatically to indicate where there are grammar issues or possible plagiarism, and so on.
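To give a flavour of the reading-level part, here is a minimal sketch of the classic Flesch-Kincaid grade formula in Python. The syllable counter is a crude vowel-group heuristic, and a shipping version would use proper ML models on the M1, but it shows the shape of the check:

```python
import re

def syllable_count(word: str) -> int:
    """Rough syllable estimate: count vowel groups, with a silent-'e' tweak."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1  # drop a likely silent final 'e'
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(syllable_count(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

sample = "The cat sat on the mat. It was very happy there."
grade = flesch_kincaid_grade(sample)
```

The point is not this particular formula; any reading-level signal could feed the one-click “send back with analysis” flow described above.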
It could further provide summaries of the different chapters and of the document itself as a cover page to give the teacher a quick way to grasp the contents.
Visual glossaries could be built by the teacher (or pre-supplied based on discipline) so that certain words have certain colours, helping the teacher skim through the work while still getting a sense of flow. Doug and I tested this and it worked very well. This could be an on/off feature, or it could be activated only when scrolling, to help navigation without interfering with readability.
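The underlying mechanic is simple to sketch. Assuming a hypothetical teacher-defined glossary (the terms and colours below are made up for illustration), the app would map glossary terms to highlight spans that the text renderer then colours:

```python
import re

# Hypothetical glossary: term -> display colour. In the real app the teacher
# would define these, or load a per-discipline preset.
glossary = {
    "hypertext": "blue",
    "citation": "green",
    "plagiarism": "red",
}

def colour_spans(text: str, glossary: dict) -> list:
    """Return (start, end, colour) spans for glossary terms, for a renderer to highlight."""
    spans = []
    for term, colour in glossary.items():
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            spans.append((m.start(), m.end(), colour))
    return sorted(spans)
```

Because the output is just spans, the scrolling-only variant falls out naturally: the renderer applies the colours while scrolling and drops them at rest.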
This could also mean creating purely visual interactions. The teacher could view the References in different ways, to quickly see whether canonical work was covered, how the citations are spread through the different chapters and where repetitions happen, as suggested by Les Carr. The teacher could also click on citations in the body of the text to see the source material in a little pop-up (as suggested by Mark Anderson, and which we have now implemented), both to check veracity and to look into material the teacher was not previously aware of.
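The citation-spread view could start as simply as counting which works are cited in which chapters. A toy sketch, where the inline “(Author, year)” pattern is purely illustrative (the real app would read citations from the document’s visual-meta rather than regex-matching the body text):

```python
import re
from collections import Counter

# Illustrative inline-citation pattern, e.g. "(Engelbart, 1962)".
CITE = re.compile(r"\(([A-Z][A-Za-z]+),\s*\d{4}\)")

def citation_spread(chapters: list) -> list:
    """Per-chapter Counter of cited author names, to visualise spread and repetition."""
    return [Counter(CITE.findall(ch)) for ch in chapters]

chapters = [
    "Hypertext began early (Engelbart, 1962) and continued (Nelson, 1965).",
    "Later work revisited (Engelbart, 1962).",
]
spread = citation_spread(chapters)
```

From these per-chapter counts, a chart showing where each cited work appears (and where a canonical work is conspicuously absent) is a small rendering step away.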
And so on. I cannot wait to find out what ML on the M1 can give us. Visualisations, yes! Interactions, yes! How about clicking on a person’s name and choosing to see all the sentences in the document where that person is mentioned, even when the text refers to the person as ‘her’ rather than by name? Natural language processing, as Bruce Horn keeps putting into my head, can be wondrous and can help the harried teacher.
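The literal-name half of that is trivial; it is resolving ‘her’ or ‘he’ back to the person that needs a real coreference model, which is exactly where on-device ML earns its keep. A sketch of just the literal-match baseline, to show where the coreference step would slot in:

```python
import re

def sentences_mentioning(text: str, name: str) -> list:
    """Naive pass: sentences containing the literal name.

    Resolving pronouns ('her', 'he') back to the person requires coreference
    resolution from an NLP library or model; this sketch is only the
    exact-match baseline that such a model would extend."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if re.search(r"\b" + re.escape(name) + r"\b", s)]

text = "Ada wrote the first program. She was brilliant. Babbage built the engine."
hits = sentences_mentioning(text, "Ada")
```

Note that the second sentence (“She was brilliant.”) is missed by the baseline; catching it is the NLP part.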
And of course, there is nothing to stop the enterprising student from buying a copy of Reader Pro (at perhaps $10?) to run through their work before submitting it, making it a useful tool to check their work and save wasting their and their teacher’s time.
We won’t remove the reading aspect of the teacher’s job (it is, after all, a prime function for a teacher), but we can spare them reading a lot of crud and help them more efficiently get the student to up their game.
The thing is, of course, that visual-meta makes the contents of the document much more structured, which facilitates this type of analysis, interaction and visualisation.
Can we up our game to deliver on this? Do you think it will sell? I think so.