
September 30, 2018

Still Hope

As I sit here in our new home with my beautiful baby boy Edgar napping and my wonderful wife Emily working around the house, I am trying hard to put together another article describing why and how this text stuff needs to happen. I think there is still hope, Doug.

I really hope Edgar will grow up with more powerful intellectual tools.

The article I am working on is called “In This Information War, Arm The Citizenry”. No link yet.


Compressed Scrolling

When we scroll through a textual document on a computer system, the body of the text quickly accelerates into an illegible blur, with the headings not far behind in losing their legibility, and thus their utility.

I am a gamer, particularly interested in the Battlefield franchise, which is not only spectacular to look at but also has a very well developed sense of movement and weapon and equipment manipulation: https://www.youtube.com/watch?v=UZW4cPUIVf4

I have experimented with gestures for a while, including gestures to pinch documents (using your trackpad or your iPad) to collapse/compress the document so that body text disappears and you have a table of contents/outline instantly.
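To make the gesture concrete, here is a minimal Swift/UIKit sketch of the pinch-to-collapse idea. It is an illustration under assumptions, not my actual prototype: the class name is hypothetical, and the 20pt font cut-off is a crude stand-in for real heading markup.

import UIKit

// Sketch only: a text view that collapses to an outline when pinched in
// and restores when pinched out. Headings are guessed by font size.
final class OutlineTextView: UITextView {
    private var fullText: NSAttributedString?   // stashed body text while collapsed

    override init(frame: CGRect, textContainer: NSTextContainer?) {
        super.init(frame: frame, textContainer: textContainer)
        addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(pinched(_:))))
    }

    required init?(coder: NSCoder) { fatalError("sketch only") }

    @objc private func pinched(_ gesture: UIPinchGestureRecognizer) {
        guard gesture.state == .ended else { return }
        if gesture.scale < 1 { collapse() } else { restore() }
    }

    private func collapse() {
        guard fullText == nil, let current = attributedText else { return }
        fullText = current
        let outline = NSMutableAttributedString()
        current.enumerateAttribute(.font, in: NSRange(location: 0, length: current.length)) { value, range, _ in
            // Keep only runs set in a large font, i.e. the headings.
            if let font = value as? UIFont, font.pointSize > 20 {
                outline.append(current.attributedSubstring(from: range))
                outline.append(NSAttributedString(string: "\n"))
            }
        }
        attributedText = outline
    }

    private func restore() {
        if let full = fullText { attributedText = full; fullText = nil }
    }
}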

However, what about changing the view based on whether you are still or scrolling, smoothly transitioning into something quite different, as a modern computer game might? The point here is to ask WHY the user is scrolling, not to simply copy the analog scroll from Egyptian times.

The user is scrolling because she wants to look at another part of the document, which is why Doug Engelbart called this navigation rather than simple scrolling (I expect). So the idea here, which I feel we should put resources into investigating, would be to change the document on scrolling: maybe move the headings closer together and make the body text smaller and greyer, apart from any names (or other custom requirements, such as instantly replacing company names with logos on scrolling). The user would then flip into a navigation/overview mode when scrolling, not simply shuffle a paper replica.

To find out whether this is indeed useful or just a fancy demo would require a very flexible and capable graphics system to experiment on. I think this is crucial work.
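Short of that full graphics system, the basic mode flip can already be sketched. Here is a hedged Swift/UIKit outline in which body text greys out the moment a drag begins and returns when scrolling settles; the class name and the heading heuristic are my own placeholders.

import UIKit

// Sketch: grey out body text while the user scrolls, so the headings
// become an instant overview; restore the text once the view is still.
final class CompressedScrollReader: UIViewController, UITextViewDelegate {
    let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        textView.frame = view.bounds
        view.addSubview(textView)
        textView.delegate = self   // UITextViewDelegate inherits the scroll callbacks
    }

    func scrollViewWillBeginDragging(_ scrollView: UIScrollView) {
        setOverviewMode(true)      // scrolling started: flip to navigation view
    }

    func scrollViewDidEndDecelerating(_ scrollView: UIScrollView) {
        setOverviewMode(false)     // scrolling settled: back to reading view
    }

    func scrollViewDidEndDragging(_ scrollView: UIScrollView, willDecelerate decelerate: Bool) {
        if !decelerate { setOverviewMode(false) }
    }

    private func setOverviewMode(_ on: Bool) {
        guard let current = textView.attributedText,
              let text = current.mutableCopy() as? NSMutableAttributedString else { return }
        text.enumerateAttribute(.font, in: NSRange(location: 0, length: text.length)) { value, range, _ in
            // Crude heuristic for the sketch: anything up to 20pt is body text.
            guard let font = value as? UIFont, font.pointSize <= 20 else { return }
            let color: UIColor = on ? UIColor.gray.withAlphaComponent(0.35) : .black
            text.addAttribute(.foregroundColor, value: color, range: range)
        }
        textView.attributedText = text
    }
}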

I want this guy to have the best reading and authoring experience when he grows up. (Side note: this picture was taken on a smartphone, an iPhone XS Max, and I therefore think our text environments have a lot more power to offer. Let's explore…)


Voice Interaction for Text

I have long argued against voice interfaces for information manipulation, since they interfere with the visual-dexterity operations of reading and writing. There is good reason why no one has asked for a system where you speak 'turn the page' to turn the page: it would take you out of your internal mental world and break your flow.

However, interesting developments are opening up. The linguistic support in macOS Mojave, which Howard Oakley discusses on his blog (https://eclecticlight.co/2018/09/27/mojaves-linguistic-support-a-promising-start), and Apple's machine-learning APIs included in iOS and macOS as Core ML (https://developer.apple.com/machine-learning/) provide opportunities for very rich text manipulation.

What it is now reasonable to design systems for includes interactions such as the following (sketched below):

  • "Show me only sentences with names. Remove headings. No, not these names over here, ignore them. Highlight 'Steve' in red."
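The first of those commands is already within reach of the Mojave-era APIs mentioned above. Here is a sketch using Apple's NLTagger from the Natural Language framework (shipped with macOS Mojave and iOS 12); the function name is mine, and a real system would also have to handle the follow-up corrections.

import NaturalLanguage

// Sketch of "show me only sentences with names": keep the sentences
// that contain at least one personal name.
func sentencesWithNames(in text: String) -> [String] {
    let tagger = NLTagger(tagSchemes: [.nameType])
    tagger.string = text
    var kept: [String] = []
    text.enumerateSubstrings(in: text.startIndex..<text.endIndex, options: .bySentences) { sentence, range, _, _ in
        guard let sentence = sentence else { return }
        var hasName = false
        tagger.enumerateTags(in: range, unit: .word, scheme: .nameType,
                             options: [.omitPunctuation, .omitWhitespace]) { tag, _ in
            if tag == .personalName { hasName = true; return false }   // stop early
            return true
        }
        if hasName { kept.append(sentence) }
    }
    return kept
}

Highlighting 'Steve' in red would then just be an attribute pass over the kept sentences.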

Designing interfaces for this can quickly demand a lot of buttons, or a lot of commands to memorise, and that is an issue.

However, I recently splurged and bought an Apple iPhone XS Max, which is very, very fast (capable of 5 trillion operations per second!) and which processes speech commands near perfectly and near instantly.

It is becoming clear that we must start experimenting with flexible views based not just on linear commands but also on analysis (Core ML). Interaction with such views can benefit from speech, where the system does not need a 'hey Siri' prompt and is aware of the on-screen/in-document data it is working on, including the results of previous operations, so that views can be built up continually.
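As a sketch of the capture side, Apple's Speech framework can stream microphone audio into recognised text with no wake word at all. Authorisation, error handling, and the connection to in-document data are elided here, and the class name is hypothetical.

import AVFoundation
import Speech

// Sketch: continuously recognise speech and hand each final transcription
// to a command handler. Requires microphone and speech recognition
// permission (omitted here).
final class CommandListener {
    private let recognizer = SFSpeechRecognizer()          // user's current locale
    private let audioEngine = AVAudioEngine()
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    func start(onCommand: @escaping (String) -> Void) throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        self.request = request
        let input = audioEngine.inputNode
        // Feed microphone buffers straight into the recognition request.
        input.installTap(onBus: 0, bufferSize: 1024, format: input.outputFormat(forBus: 0)) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()
        task = recognizer?.recognitionTask(with: request) { result, _ in
            if let result = result, result.isFinal {
                onCommand(result.bestTranscription.formattedString)
            }
        }
    }

    func stop() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        request?.endAudio()
        task?.cancel()
    }
}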

This could be backed up with a tokenised command bar, where commands are added as tokens when spoken and acted upon, both so the user can confirm the commands were correctly interpreted and so the user can visually edit them as desired, then share/save the command set as a ViewSpec if useful.
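A minimal sketch of what such a command bar might hold, with entirely hypothetical names; a real implementation would map utterances to tokens with a Core ML intent model rather than the string matching used here.

// One token per spoken, confirmed command.
enum CommandToken {
    case showOnlySentencesWithNames
    case removeHeadings
    case highlight(term: String, color: String)

    // Crude stand-in for a real speech-intent model: map an utterance
    // to a token, or nil if it was not understood.
    static func parse(_ utterance: String) -> CommandToken? {
        let u = utterance.lowercased()
        if u.contains("only sentences with names") { return .showOnlySentencesWithNames }
        if u.contains("remove headings") { return .removeHeadings }
        // e.g. "highlight Steve in red"
        let words = u.split(separator: " ").map(String.init)
        if words.count == 4, words[0] == "highlight", words[2] == "in" {
            return .highlight(term: words[1], color: words[3])
        }
        return nil
    }
}

// The ViewSpec is simply the ordered, user-editable token list; saving
// or sharing it replays the same view on another document.
struct ViewSpec {
    private(set) var tokens: [CommandToken] = []

    mutating func add(spoken utterance: String) -> Bool {
        guard let token = CommandToken.parse(utterance) else { return false }
        tokens.append(token)      // shown as a token in the command bar
        return true
    }
}

The bar would render the tokens as removable, reorderable chips, so the user can see what was heard before editing, saving, or sharing the set.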

I feel this warrants serious consideration.
