I rediscovered this old (1999) interview with Doug Engelbart, and subsequently Ted Nelson, both of which touch on their alternative paradigms for information management. Here are the juicy parts, extracted. I found the Engelbart to be more lucid than the Nelson :)
About fifty years ago, December 1950 and early 1951, I said to myself: ‘let me commit the rest of my career to seeing how I can maximise my contribution to mankind’.
B: Do you think people understand your “framework” more now than they used to?
E: Not particularly, no.
The paradigms seem to be ‘oh, we’re going to automate things we do now, automate the way we do business’ etc. The idea of really augmenting is something people are beginning to register, but it’s in a limited conceptual frame.
We need to think about how to boost our collective IQ, and how important it would be to society because all of the technologies are just going to make our world accelerate faster and faster and get more and more complex and we’re not equipped to cope with that complexity. So that’s the big challenge.
What really connected for me was thinking about what an augmentation system really is, that humans learn how to live within and with social structures and conventions and facilities and tools, so all that is one giant augmentation system.
New social online media could be starting to do this: Twitter, Facebook, etc.
We desperately need to be getting better at our collective ability. And on the other hand, if we don’t get proactive in user organisations, the human system side of it is going to get pushed by just the technology.
This seems to be happening today with a focus on new technology rather than new directions.
B: There’s something I’ve noticed from talking to [Ted] Nelson a little, and reading Bush… that this technology is modelled on how the mind works. Unlike normal technological evolution, hypermedia was originally modelled on the human system, not vice versa.
E: Wow, yes. Right.
But then the real potential is that the way it worked in the past has been very much affected by the environment that we’re in. And so you’ve got an environment now that’s so different you can really rethink the way you harness the basic machinery, you know basic cognitive, sensory motor machinery. Our whole concept of how we symbolise and manipulate and portray our concepts. So that’s when I said “oh boy, we’re looking at something where you have all kinds of optional views of your document”, so we started building that in the sixties, and that’s still not into the web world yet.
E: So something arising from that, evolving out, in order to establish the kind of common properties that need to be in the knowledge package, the knowledge containers. There are quite a few more things that need to be there and a lot more evolution has to get specifically pursued, so in order to do that, there’s properties that you build into the documents as well as the functions that you employ to operate on them. So for instance, in our environment, we would never have thought of having a separate browser and editor. Just everyone would have laughed, because whenever you’re working on trying to edit and develop concepts you want to be moving around very flexibly. So the very first thing is get those integrated.
Then [in NLS] we had it that every object in the document was intrinsically addressable, right from the word go. It didn’t matter what date a document’s development was, you could give somebody a link right into anything, so you could actually have things that point right to a character or a word or something. All that addressability in the links could also be used to pick the objects you’re going to operate on when you’re editing. So that just flowed. With the multiple windows we had from 1970, you could start editing or copying between files that weren’t even on your windows.
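Intrinsic addressability can be sketched in a few lines. The sketch below is my own illustration, not NLS’s actual data model: every word gets a permanent id when it enters the document, so a link resolves by id rather than by position and survives later edits.

```python
# Sketch of NLS-style intrinsic addressability (my illustration, not the
# actual NLS data model): each word gets a permanent id at creation, so
# links resolve by id, not position, and survive subsequent edits.
import itertools

class Document:
    _ids = itertools.count(1)  # id generator for this sketch

    def __init__(self, text):
        self.words = [(next(self._ids), w) for w in text.split()]

    def address_of(self, word):
        # A stable link target for the first occurrence of `word`.
        return next((i for i, w in self.words if w == word), None)

    def resolve(self, link):
        # Follow a link right to the object, wherever it now sits.
        return next((w for i, w in self.words if i == link), None)

    def insert(self, index, word):
        # Edits elsewhere do not invalidate existing addresses.
        self.words.insert(index, (next(self._ids), word))

doc = Document("every object is addressable")
link = doc.address_of("addressable")
doc.insert(0, "intrinsically")             # edit before the link target
assert doc.resolve(link) == "addressable"  # the link still resolves
```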
Also we believed in multiple classes of user interface. You need to think about how big a set of functional capabilities you want to provide for a given user. And then what kind of interface do you want the user to see? Well, since the Macintosh everyone has been so conditioned to that kind of WIMP environment, and I rejected that, way back in the late 60s. Menus and things take so long to go execute, and besides our vocabulary grew and grew.
And the command-recognition [in the Augment system]. As soon as you type a few characters it recognises; it only takes one or two characters for each term, and it knows that’s what’s happening.
B: And with command-line, you have more control.
E: Right, so what you have is a vocabulary control, because you can use real verbs and real nouns, and nouns will actually tell you what class of object you want to do something with, you want to copy a character, or copy a word, copy a whole paragraph.
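That verb-noun recognition amounts to prefix matching against a known vocabulary. Here is a minimal sketch with an invented toy vocabulary; Augment’s real command grammar was far richer.

```python
# Toy sketch of Augment-style command recognition: a typed prefix is
# matched against the vocabulary and accepted as soon as it uniquely
# identifies a verb or a noun. The vocabulary below is illustrative.
VERBS = ["copy", "delete", "insert", "move"]
NOUNS = ["character", "word", "paragraph", "statement"]

def recognise(prefix, vocabulary):
    """Return the term once the prefix is unambiguous, else None."""
    matches = [t for t in vocabulary if t.startswith(prefix)]
    return matches[0] if len(matches) == 1 else None

def parse_command(typed):
    # e.g. "c w" -> ("copy", "word"): each term needs only enough
    # characters to be unique within its class.
    verb_part, noun_part = typed.split()
    return recognise(verb_part, VERBS), recognise(noun_part, NOUNS)

assert parse_command("c w") == ("copy", "word")
assert parse_command("d p") == ("delete", "paragraph")
```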
B: So how does all this inform the web?
Our proposal is, let people experiment with different kinds of interface that have a common vocabulary underneath.
So you could flip into your own interface when you need it. Then let people start experimenting with much more flexible ways of doing it, experiment with functions and nouns and verbs that the more elementary or what we call pedestrian users aren’t ready for. But they all work over the same knowledge domains. So that kind of environment is what you have to do to get evolution happening. And you have to do it with open standards for documents. So, I say no proprietary ownership of the class of functions you’re going to employ.
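The “common vocabulary underneath, many interfaces on top” idea can be sketched as one table of operations with two front ends over it. The operation names and both interface functions below are invented for illustration:

```python
# Speculative sketch of "multiple classes of user interface" sharing one
# vocabulary: the same verbs and nouns underneath, exposed through either
# a beginner menu or a terse expert command line. All names are invented.
OPERATIONS = {
    ("copy", "word"): lambda doc: f"copied a word in {doc}",
    ("delete", "paragraph"): lambda doc: f"deleted a paragraph in {doc}",
}

def pedestrian_ui(menu_choice, doc):
    # Beginner interface: a numbered menu over the shared operations.
    verb_noun = sorted(OPERATIONS)[menu_choice]
    return OPERATIONS[verb_noun](doc)

def power_ui(command, doc):
    # Expert interface: "copy word" maps onto the same vocabulary.
    verb, noun = command.split()
    return OPERATIONS[(verb, noun)](doc)

# Both interfaces drive identical operations over the same documents.
assert power_ui("copy word", "notes") == pedestrian_ui(0, "notes")
```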
The words which killed me, which exiled me were:
Easy to learn, natural to use.
So everybody is supposed to ride tricycles because they’re easy to learn, natural to use. But the world shouldn’t live on tricycles, past the age of six. That’s the best analogy I can give to the world: do you want to be locked into tricycles because they’re easy to learn?
B: So, really the problem with markup is that it is embedded? If it were not embedded, we could distinguish between form and content?
T: Yes… that’s all. Just put it aside. Now you have pure stuff which can be put in one box, scanned and transcluded. That was always one of my fundamental ideas. That you want to be able to re-use materials freely without over-committing.
B: So, having a language which is not embedded in the text would allow for things like more addressability, directional links.
The idea of ease of re-use is a good GUI design objective. Engelbart is more clear and concrete on the possible implementation details, I think.
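To make the external-markup/transclusion idea concrete, here is a rough sketch of my own, not Xanadu’s actual design: content lives once in a shared pool, and a document is just a list of span references into it, resolved at read time instead of copied.

```python
# Rough sketch of transclusion (my illustration, not Xanadu's design):
# content is stored once in a shared pool; a document is a list of span
# references into that pool, resolved whenever the document is read.
POOL = {
    "essay-1": "Everything is deeply intertwingled.",
    "essay-2": "no boundaries or fields except those we create",
}

def transclude(source, start, end):
    # A reference, not a copy: re-use material without duplicating it.
    return ("span", source, start, end)

def render(spans):
    # Resolve each span against the pool at read time.
    return "".join(POOL[src][a:b] for _, src, a, b in spans)

doc = [transclude("essay-1", 0, 21), transclude("essay-2", 0, 13)]
assert render(doc) == "Everything is deeply no boundaries"
```

Markup, in this view, would live in a parallel overlay of such spans rather than inside the text itself.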
B: If you had to list the problems, limits of the web as it is today that Xanadu could improve on, what would they be?
T: Breaking links, one-way links, no annotation, no transclusion, copyright problems. That’s pretty much a summary. But there’s an even deeper paradigm conflict, because having everything instantly re-usable, and easily re-usable–frictionlessly, was always a principle. You see, as soon as you start listing things, you’re out of understanding the paradigm and into understanding features. This is aspects, features. Aspects and features are listable. Er…
These ideas could be fixed, but only within a closed system, since you need trust (or at least no spam).
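One item on that list, two-way links, is easy to sketch: a registry (invented here, not Xanadu’s design) records each link in both directions, so any target can enumerate what points at it.

```python
# Hedged sketch of two-way links: a central registry (my invention, not
# Xanadu's actual mechanism) records each link in both directions, so a
# target always knows its inbound links; a deleted target could then
# report exactly which links it breaks.
from collections import defaultdict

class LinkRegistry:
    def __init__(self):
        self.outgoing = defaultdict(set)
        self.incoming = defaultdict(set)

    def link(self, src, dst):
        # One registration, visible from both ends.
        self.outgoing[src].add(dst)
        self.incoming[dst].add(src)

    def backlinks(self, dst):
        # The web gives you only `outgoing`; this is the missing half.
        return self.incoming[dst]

reg = LinkRegistry()
reg.link("pageA", "pageB")
reg.link("pageC", "pageB")
assert reg.backlinks("pageB") == {"pageA", "pageC"}
```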
T: Basically, I have the philosophical view that everything is completely interconnected. Or as I like to say, intertwingled. And there are no boundaries or fields except those we create artificially, and we are deeply misled by conventional boundaries and descriptions. That’s my summary of general schematics… I was very annoyed with linear forms of writing, because you always had to cut things off.
This relates also to the quote above about organising content as the mind does: a connectionist model. I think Ted has to realise that people can only read in a linear fashion. Though I guess the underlying structure doesn’t have to be linear, as with his idea of being able to use footnotes everywhere.
I was always extremely sophisticated.
Ted seems mainly to object to linear text. He has a wandering mind and doesn’t like to focus too much on one subject (‘subjects’ he doesn’t believe in). He sees SGML, HTML, XML, all the *MLs, as flawed because they are linear: by their nature they live in a file, which is sequential. XML adds hierarchy, but ultimately this is still linear. I think he does have a point about there being a “paradigm conflict” with his view (and possibly all human brains’ view) of information. “I believe that embedded markup, daily more tangled, will implode and leave HTML as an output format, supplanted by deeper editors and deeper hypermedia forms.” [‘Embedded Markup Considered Harmful’] I think he’s right in that the web is just a surface form and needs deeper structures to be all it can be.
Doug, on the other hand, is famous for his invention of the mouse and all the GUI ideas that came with it. He addresses many of the implementation ideas and challenges, and even points out, based on evidence, why they are necessary. Check out his augmentation papers for more details.