

State of My Art (week two at Southampton)

I’ve started my PhD studies and as such I have started diving into the literature. One of the first things I did was look at Craig Tashman’s PhD thesis for LiquidText, since it clearly has some commonalities with my own work. I am inspired by his work, but it is also quite different from what I am trying to do. Anyway, I think I should spend a little time on defining the State of My Art in terms of how I see the issues and potential solutions at this stage. This is a bit of a hodgepodge at this point, with reading and writing features mixed in, and I hope I can stretch it quite far, then maybe edit, but this is what I have today…

Fundamentals

• It takes less energy to produce information than it does to organise or assimilate the information. This leads to information overload.  

• What is important to note, however, is that information is not the most fundamental thing – interaction is. Information arises out of interactions, and this gives us a deep insight into how to deal most efficiently with information overload: increase the interaction we have with the information.

 

Documents

Information is stored in many forms. My particular interest, however, is documents. To really start from first principles I have to ask what a document is and what the point of a document is:

 

• A document is the framing of a human perspective. This highlights why documents are important: they embody the process of thinking, justifying and presenting a coherent argument.

• The intent of a document is to convey this perspective. 

 

Document Component Connections

 

It is important to note that the useful information in the document is not only contained in the text – it is also contained in the relationship between different pieces of information – and in the structure of the document. This highlights specific needs for specific types of interactions.

Furthermore, it is important to note the different motivations and processes that lead to writing versus those that lead to reading:

Why Write

 

•  Writing is a way of recording information for the author’s later reference, for ‘publication’, or a combination of the two.

 

The Issue With Writing

•  Writing is the process of transcribing thoughts into coherent, grammatical sentences and sentence fragments. Our mental constraints mean that a fully structured and cohesive model of what we are saying or typing does not come out smoothly in one continuous stream – we simply don’t have the mental space for this to occur. As a result, editing becomes an inherent part of writing, but because the need for editing stems from our lack of mental space, it is mentally taxing, and software systems have not developed rich editing environments to support it. The result is too much redundant text and not enough connections and structure.

Why Read

 

•  We read to gain an increased understanding of a subject. As such, we can greatly benefit from well-ordered, clear, properly cited documents. This is not what writing naturally produces, however, so we need to build tools and systems to help the author, who has different motivations and energy, produce what the reader will benefit from, whether that reader is the author’s future self or someone else.

•  When we read to understand, rather than simply for entertainment, we benefit from active reading (a term central to Craig Tashman’s LiquidText), including making annotations, regardless of whether the material was well written and well structured.

Key Issue: Too Much Text, Not Enough Structure

A key issue, which we must not ignore, is that there is way too much text to read. There is too much crud around the important, meaningful text and the meaningful text itself can be belaboured and repeated. 

Also, the text is not explicitly well structured. 

Approaches to Deal With This

Here is a list of a few of my own ‘inventions’, or ideas I have somehow picked up from others. Either way, these are concrete approaches for dealing with the key issue of too much text.

It is important to let the reader and the writer flexibly change the view of the document. This includes (and most of this is quite a familiar theme in the industry) the ability to:

 

•  Zoom out to an overview and zoom back in, where the actions of zooming in and out support different needs, such as showing more information when zooming in (summaries, notations etc.).

•  Change the view of the document to show specific text, information or relationships.

•  Remove unnecessary material by using summarisation techniques to remove repetitions etc.

•  Look up any text, as with Liquid | Flow, and more advanced lookups.

•  Compare sections of text with others in the document or elsewhere. This likely calls for both a mind map view and a hybrid mind map view.

Specific Functions/Features

The bridge between the underlying fundamentals and specific ways to deal with them is not always clear. In some ways the fundamentals are merely inspirations for solving very specific human information interaction needs. I have suggestions for how Author can be improved, which are mostly modest features, but I feel it’s important to have a track for relatively normal improvements so that I can relax and think bigger… See http://www.liquid.info/author-project.html for the list.

More Advanced Approaches

 

Input

It should be very quick to switch between typing and speech while writing.

Action on Text

The user should be able to select text, say a word, a sentence or a paragraph, and act upon it, not just to look it up but as a basis for manipulating the document. This should include actions such as the following (a rough sketch of the first action follows the list):

 

•  Show only sentences with this text (though sentences which refer to the text should also be included, as the High Performance Text Interaction Libraries should allow)

•  Show relevant occurrences elsewhere
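As a minimal sketch of the first of these actions, assuming plain text and a naive regex-based sentence splitter (a real implementation would work on the document model and could also pull in sentences that refer to the selection indirectly):

```typescript
// Minimal sketch of "show only sentences with this text".
// Assumes plain text and a naive sentence splitter; a real implementation
// would use the document model and could also resolve indirect references
// (pronouns, synonyms) to the selected text.
function sentencesMentioning(text: string, selection: string): string[] {
  const sentences = text.match(/[^.!?]+[.!?]+/g) ?? [];
  const needle = selection.trim().toLowerCase();
  return sentences
    .map(s => s.trim())
    .filter(s => s.toLowerCase().includes(needle));
}

// Example: collapse the view to only the sentences mentioning the selection.
// sentencesMentioning(fullDocumentText, "active reading");
```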

Pleasure of Pinch

Pinching out to see an overview is not the same as pinching in:

You would pinch out to collapse the document into headings only, then pinch more and more to collapse the levels one by one.

You would then tap on a heading to ‘jump’ to it, or just read/skim it and then dismiss it to return to where you were reading.

If you have pinched out and collapsed all levels of heading apart from the top level, you can pinch back in to reveal the underlying headings, one by one. Once you have all levels open, you should be able to pinch yet again to have a single-sentence summary of each level appear. Pinch yet again and the document opens. Any notes the user has added should appear at this level as well.
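As a rough sketch of how this might be modelled, assuming discrete pinch steps and numbered heading levels (1 = top level); gesture recognition and animation are left out, and all names here are hypothetical:

```typescript
// Rough sketch: mapping discrete pinch steps onto an outline "collapse depth".
// depth 1            -> only top-level headings are visible
// depth maxLevel     -> all heading levels are visible
// depth maxLevel + 1 -> headings plus a one-sentence summary per section
// depth maxLevel + 2 -> the full document, including the reader's notes
type ViewState = { depth: number };

function pinchOut(state: ViewState): ViewState {
  // Each pinch-out collapses one more heading level, stopping at the top level.
  return { depth: Math.max(1, state.depth - 1) };
}

function pinchIn(state: ViewState, maxLevel: number): ViewState {
  // Each pinch-in reveals one more step: deeper headings, then summaries,
  // then the full text with notes.
  return { depth: Math.min(maxLevel + 2, state.depth + 1) };
}
```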

Collapsing

The writer should be able to choose to collapse a full document and only work on/see a single section, to make it easier to get an overview.

Lists should be collapsible and should be able to be horizontal as well as vertical. 

Notations

I define author-added extra information as Comments and reader-added extra information as Notations.

It would be valuable to bring as much of the power and flexibility of paper notation into the digital document as possible, while retaining the power of the digital substrate. As such, a separate layer for notations should be provided, which keeps an accurate relationship with the underlying text (a rough sketch of such a layer follows at the end of this section).

When selecting sections of text, the user should be able to:

 

•  Underline 

•  Highlight 

•  Annotate with text and links etc.

•  Draw anywhere

•  Create connections with other passages of text

It is important that the notations remain with the document when the user chooses to enter mind-mapping mode, below.
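As a rough sketch of what such a separate notation layer could look like, assuming notations are anchored to character ranges in the underlying text; all names and fields here are hypothetical:

```typescript
// Hypothetical sketch of a notation layer kept separate from the text,
// anchored to character ranges so notations can follow the text into
// other views (e.g. the mind-map view described below).
type NotationKind = "underline" | "highlight" | "note" | "drawing" | "connection";

interface Notation {
  id: string;
  kind: NotationKind;
  anchor: { start: number; end: number };     // character range in the document
  body?: string;                              // note text, link, etc.
  linkedTo?: { start: number; end: number };  // second anchor, for connections
  created: Date;
}

// The document text and the reader's notations live in separate layers.
interface AnnotatedDocument {
  text: string;
  notations: Notation[];
}
```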

Commenting

A commentary layer for discussion of the content, following a global standard (maybe the W3C standard), will be an important addition to the discourse.

Mind-Mapping

In support of multiple views, it should be possible to turn the document into a card-based mind-map view, where headings become cards (which can be nested based on heading levels) and body text collapses into the cards for this view, though of course it can be expanded in this view as well.

I am not sure how to implement this view, but invoking it should feel tactile, like a pinch or a swipe. Maybe a four- or five-finger pinch out?
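As a minimal sketch of the underlying transformation, assuming the document has already been parsed into a flat list of headings with levels and body text (the gesture and the card layout are not addressed here):

```typescript
// Minimal sketch: folding a flat heading outline into nested "cards"
// for a mind-map view. Body text is carried on each card, collapsed
// by default in that view.
interface Section { level: number; heading: string; body: string }

interface Card { heading: string; body: string; children: Card[] }

function toCards(sections: Section[]): Card[] {
  const roots: Card[] = [];
  const stack: { level: number; card: Card }[] = [];
  for (const s of sections) {
    const card: Card = { heading: s.heading, body: s.body, children: [] };
    // Pop until we find this section's parent (a shallower heading).
    while (stack.length && stack[stack.length - 1].level >= s.level) stack.pop();
    if (stack.length) stack[stack.length - 1].card.children.push(card);
    else roots.push(card);
    stack.push({ level: s.level, card });
  }
  return roots;
}
```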

Processing

There is no inherent reason why the document should not be somewhat pre-processed, for such things as adding definitions next to specialised words in a subject area the reader is not familiar with (the reader would need to request this, or the system could learn over time what the reader knows). Basic summarisation could also be done, for example to grey out all text in the document which repeats information.
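As a rough sketch of the simplest version of the greying-out idea, catching only exact (normalised) sentence repetition; real summarisation would of course need to detect paraphrase as well:

```typescript
// Rough sketch: mark later occurrences of a repeated sentence so the
// view can grey them out. Only exact (normalised) repetition is caught;
// paraphrase detection would need real summarisation behind it.
function markRepeatedSentences(text: string): { sentence: string; repeated: boolean }[] {
  const sentences = text.match(/[^.!?]+[.!?]+/g) ?? [];
  const seen = new Set<string>();
  return sentences.map(raw => {
    const key = raw.trim().toLowerCase().replace(/\s+/g, " ");
    const repeated = seen.has(key);
    seen.add(key);
    return { sentence: raw.trim(), repeated };
  });
}
```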

Assistant

For student writing, there could be a type of assistant which the student can invoke and which asks questions about the document: starting with what type of document is being written, then asking where certain information is, as a template would dictate it should be. The student then either selects the relevant text, so the system can tag that section with that topic, or acknowledges that the section is not done yet, writes it, and goes through the process again.

Publish

Much is being said about AI, and this is one area where it can be useful. On ‘Publishing’ a document, rather than simply saving it, a series of AI modules can check for plagiarism, writing level and so on, as well as summarising the document so that the writer knows whether what they attempted to get across is actually what came across.
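As a sketch of how such a publish step could be structured, as a pipeline of independent check modules; the module and function names here are hypothetical placeholders, not an existing API:

```typescript
// Hypothetical sketch: "Publish" runs the document through a pipeline of
// independent check modules (plagiarism, readability, summary, ...) and
// collects their reports before the document is actually published.
interface CheckReport { module: string; passed: boolean; details: string }

type CheckModule = (text: string) => Promise<CheckReport>;

async function publish(text: string, modules: CheckModule[]): Promise<CheckReport[]> {
  // Run all checks in parallel; the UI would present the reports to the writer.
  return Promise.all(modules.map(m => m(text)));
}

// Example placeholder module (not a real plagiarism or writing-level checker):
const readabilityCheck: CheckModule = async (text) => ({
  module: "readability",
  passed: text.split(/\s+/).length > 0,
  details: "Word count: " + text.split(/\s+/).length,
});
```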

Masses of Meta

Meta-information should not be a crude add-on, but a centrally thought-through aspect of the visual marks on the screen. Tags assigned in HTML editors today can easily be broken when you cut and paste. Tags should be robustly attached to the text, and breaking tags by selecting text without consideration of the tags should be technically protected against. Tags should record time of writing, location and much more, to help the reader go through the material later.

Teacher View

Imagine the teacher having a spreadsheet with all students listed and updates on how they are doing with all their readings and all their papers in one view. It could even be colour coded to show how they are feeling, based on deep learning over large corpora of documents written by people known to be happy, sad or in other states of mind. Many other pieces of information can be tracked here, with student consent, such as how often they read and more, to help the teacher get a better understanding of how students (particularly quiet ones) are doing.

Structural/Infrastructure Issues

Stan Gould wrote to me: “I believe you have a nascent paradigm-shifting offering, whose greater value to society simply awaits the ability to frame it that way.  It is my opinion, that in our new era you will find that explicitly designing “frameworks, platforms & ecosystems” and conceptually placing your Apps within those conceptual organizational objects will both resonate with the marketplace and amplify your success efforts.” (docs.google.com)

He is right. It’s not enough to cobble together features in apps; we need to change the flow of education, to increase the value of the act of writing and citing well, and to increase the power of active reading, while integrating with all the accessible media. This will be a big challenge.

Citations

 

It is clear that citations will essentially need to be re-invented in the digital world, to help students make them more quickly and clearly, and to help teachers check them more easily. This is a key issue.

VR Web

I was lucky enough to attend the launch of the UK and Ireland W3C office on Friday, and among the brilliant people and brilliant projects I heard mention of a W3C workshop on VR (the first one…) in Silicon Valley this coming week, and then things just exploded in my head: in the same way we need standards and security – in the same way we need the W3C – for the ‘2D’ rendered web, we certainly need the same for the upcoming VR-rendered web. I hope to attend the workshop, but they are out of space and it’s very unlikely they will allow me in; still, I hope I can be there, at the very start of this… Please also refer to my Universe Sculpture post: http://wordpress.liquid.info/wp-admin/post.php?post=2269&action=edit for issues related to this.


Social Machines

Starting with A Taxonomic Framework for Social Machines* as our main text (to which all citations in this document refer, unless otherwise stated), we performed a literature review of the field of Social Machines.

The paper concludes with a working definition, which is a good starting point for understanding what Social Machines are, using the definition put forward previously by Smart and Shadbolt: 

“Social Machines are Web-based socio-technical systems in which the human and technological elements play the role of participant machinery with respect to the mechanistic realisation of system-level processes.”*

This definition builds on the one by Tim Berners-Lee and Mark Fischetti:

“Real life is and must be full of all kinds of social constraints – the very process from which society arises. Computers can help if we use them to create abstract social machines on the Web: processes in which the people do work and the machine does the administration.”*

Examples of Social Machines

A list of some of the most familiar examples in this space, as highlighted by SOCIAM, taken from the site http://sociam.org/social-machines:

Social Networks: 4chan, Facebook, Flickr, LinkedIn, Tumblr, Twitter

Public Service Social Machines: CollabMap, Crime reports, eBay, Fix my street, The search for Jim Gray, Ushahidi 

Knowledge Sharing Social Machines: Delicious.com, digg, Github, Quora, reddit, Stack Overflow, Wikipedia 

Citizen Science Social Machines: Duolingo, Galaxy Zoo, New Forest Cicada

‘Work’ & Computer/Human Relationship

The non-rigid distinction between computers doing ‘administration’ work and humans doing ‘work’ is highlighted* using examples of Wikipedia bots and the PicBreeder software system, showing that it is far from clear that humans have a monopoly on creative work and computer systems a monopoly on clerical tasks. I think this distinction, based on their assertion that work really means ‘creative work’*, stretches the original notion of ‘work’ into a particular definition which may not be wholly appropriate, since there are so many other ways to describe human ‘work’. It could, for example, be described as being about decision making rather than execution, in which case making a decision for the computer to perform a ‘creative’ task would still count as human work.

The notion of humans and computers working together, utilising the strengths of both in order to provide the human with high levels of efficiency, goes back to Doug Engelbart, who wrote:

“By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble.”**

And further back still, Doug Engelbart’s supporter and sponsor, J. C. R. Licklider, wrote a few years earlier, in 1960:

“Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers.”*

And then back to Vannevar Bush, who wrote about the need for more advanced means through which to work with information, which would lead him to explore the possibilities of augmenting work through a system he would call the Memex, composed of, among other technologies, microfilm:

“The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships … A record, if it is to be useful to science, must be continuously extended, it must be stored, and above all it must be consulted.”*

Social

What is added with this perspective is the explication and highlighting of the ’social’ component, where ‘social’ refers to interaction among multiple people as the primary activity: not one person interacting with a computer system, but a wide range of activities that rely on a combination of decentralised human activity and computational processing.*

The focus on the social aspect, that is to say the multiple-user aspect, provides the benefit of large numbers of ‘brains’; in addition, social machines are able to exploit differences between individuals with respect to abilities, skills, insights, perspectives, knowledge, geographical location, experiences, group membership, social position, and so on.*

Web / Internet

For what I imagine are political reasons, there is a perhaps undue reliance on the ‘Web’ when discussing Social Machines, rather than ‘simply networked’ systems or the Internet:

“…for our purposes, social machines are cast as Web-based systems. Although we do not rule out the possibility of social machines that are independent of the Web, we suggest that the properties of the Web make Web-based social machines a particularly important focus of social and scientific attention. One virtue of the Web, in this respect, is that it enables us to tap into the capabilities of human agents in a manner (and on a scale) that has never been seen before. The Web is a social technology that interfaces with a large proportion of humanity. By firmly embedding itself within a human social environment, the Web opens up a range of opportunities to incorporate human agents into episodes of machine-based processing. This makes Web-based social machines capable of supporting processes that would be difficult or impossible to realize in other kinds of social (or indeed socio-technical) context.”*

Who Works With Social Machines 

People

 

The people initially associated with Social Machines were Tim Berners-Lee and Mark Fischetti, who introduced the term in their book Weaving The Web.

Nigel Shadbolt, Daniel Smith, Elena Simperl, Max Van Kleek, Yang Yang and Wendy Hall wrote Towards a Classification Framework for Social Machines for the International World Wide Web Conference Committee in 2013, as part of work under the SOCIAM Project.

Organizations

The SOCIAM team is made up of researchers from the Universities of Oxford, Southampton, and Edinburgh, with researchers listed at http://sociam.org/team. From the project website: “The ultimate ambition of SOCIAM is to enable us to understand how the Social Machines evolve in the wild and what factors influence their success and evolution. Its aim is to develop both theory and practice so that we can create the next generation of Social Machines.”*

MIT has a Laboratory for Social Machines focusing on the development of new technologies to make sense of semantic and social patterns across the broad span of public mass media, social media, data streams, and digital content. 

Earlier Work

Information coming out of the paper ‘Towards a Classification Framework for Social Machines’ by Nigel R. Shadbolt, Daniel A. Smith and Elena Simperl, from 2013, helped shape the above paper.

The Future

The papers referred to above do not address the coming of Virtual or Augmented Reality, but this level of interaction with, and through, computers will shape the notion of what a social machine can become, over the next few decades and beyond. 

The Social Machine: Designs for Living Online by Judith Donath (Donath, 2014), who spent some time at the MIT Media Lab, discusses the implications of such a future, but I’m afraid I have not had the opportunity to read this title deeply yet. She provocatively refers to Ted Nelson’s ‘Dream Machines’ (Nelson, google.co.uk) when she discusses previous uses of the term ‘machine’ for ‘computer’, and Ted’s notions should be considered quite ripe for embodiment in VR. I finish with that thought.
