50th Anniversary of the Mother of All Demos (2018) (amasad.me)
302 points by jdcampolargo on June 9, 2022 | 82 comments


I was very lucky to have worked with Doug on multiple projects and see his passion for computer science and humanity. He was one of the most humble people and was happy to share all his wisdom. I am so grateful to have had the opportunity to work with him. To be specific, I built out purple numbers for Doug. I remember demoing it to him, it was hacky, and the joy he showed made my soul smile. He was a very special man. I have many other stories, but that is for another day.


We will be here to read them when you are willing to share. It's a treat to hear personal accounts of my idols.


Oh yes, please!

He must’ve been terribly disappointed with how things evolved. In the 1960s, he envisioned computers that would keep track of all human knowledge for us, help us find contradictions, fill knowledge gaps, and ask better questions. And what did we get instead? Clickbait and spam bots. Did he see it that way?


I would be very interested in a post with more stories!


And yet we still haven't learned enough from it.

The fact computers have been so isolated from one another (even with internet apps) is a complete tragedy. Even sending large files is a pain in the butt to this day.

The demo had multiple people working on the same computer remotely, with their own cursors and ability to do things on the system at the same time.

Best we have now is web-browsers syncing data silos, or a one way "let this person control my mouse + keyboard".

It feels like building a bridge by everyone making parts in isolation, then expensively shipping them to a work site that only has one crane to assemble them.

Usually it's way easier to have multiple people work on something. Like carrying something heavy, or having a line of people walk the bridge with the right tools/verification to install the rivets right, or even the opportunity for someone to speak up and present unique ideas to the group.

Even just a few basic OS primitives early on could have jump-started so many collaboration tools and techniques that mirror how we all work together everywhere else in real life.

As an aside, the whole "meta-verse" craze might pull these ideas through to the mainstream again.

I see lots of demos of avatars standing at a table working on stuff together, and I've been on VRChat before, drawing 3D images with crayons at the same time as other people.

It's incredibly fun to try and draw an elephant together, working on different parts at the same time, with hilarious results.


Yeess. Recommendation, drop everything & go read Matt Webb on the computing that could have been, talking SAGE & multi-user workspaces[1]!

> Right now we’re in the era of personal computers. Collaboration, social use of tools, togetherness: all of these are hacks on top of something that, at its core, was designed for the individual first.

> But there’s a particular photograph of group computing from the 1950s, from before the PC was invented...

I wonder how much, if at all, Engelbart was aware of SAGE. Webb points out Engelbart got his start working on computers via SAGE.

> As an aside, the whole "meta-verse" craze might pull these ideas through to the mainstream again.

Heck yeah, right on. Even if the immersive modes fail, I really hope some embodiment survives. Constructing space & environments in which being & event can occur, manifesting processes... I have a lot of hope we can start to better ground & make real computing.

[1] https://interconnected.org/home/2021/12/21/sage https://news.ycombinator.com/item?id=31665364 (12 points, 10 hours ago, 2 comments) https://news.ycombinator.com/item?id=31535139 (14 points, 12 days ago, 1 comment)


I mean, have you played any massively multiplayer online game in the last 20 years?


Now we just need to start presenting computing inside these environments! Those projects of using Doom to kill processes... http://psdoom.sourceforge.net/


> In the original implementation of the program, 'pid monsters' could be killed not only by the program's user, but also by other 'pid monsters' and normal Doom monsters on the level. The reasoning behind this behavior was that "on very heavily-loaded machines, it is not uncommon for the OS to kill random processes." Unfortunately, the number of monsters in a given area must be depressingly small in order for them to avoid both intentional infighting and friendly fire. Since monsters would tend to kill each other off until only a few remained in the area, the user was severely hampered in the ability to orderly control processes on the machine. Therefore, the default behavior of psDooM is to ensure the player is the only character in the game who can wound and kill 'pid monsters'. This avoids accidental process deaths from monster infighting. Unfortunately, it doesn't prevent accidental process deaths from a user's poor aim. ;-} The original behavior of 'pid monsters' being as vulnerable as other monsters may be enabled with a command line flag.
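The player-only damage rule is easy to picture. Here's a toy sketch of the idea (hypothetical, not psDooM's actual source): damage only lands, and the underlying process only gets killed, when the attacker is the player.

    import os
    import signal

    class PidMonster:
        """A Doom monster standing in for a running OS process."""
        def __init__(self, pid, hp=10):
            self.pid = pid
            self.hp = hp

        def take_damage(self, attacker, amount, allow_infighting=False):
            # Default psDooM-style behavior: ignore monster-vs-monster
            # damage so infighting can't kill processes by accident.
            if attacker != "player" and not allow_infighting:
                return
            self.hp -= amount
            if self.hp <= 0:
                # The monster's death kills the real process (Unix-only). ;-}
                os.kill(self.pid, signal.SIGKILL)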

That's amazing lol


> The demo had multiple people working on the same computer remotely, with their own cursors and ability to do things on the system at the same time. Best we have now is web-browsers syncing data silos, or a one way "let this person control my mouse + keyboard".

Apple's multi-user support in its "iWork" productivity apps actually comes very close to this, and it's a real shame it isn't better known. In fact, Apple just demoed improvements to it a couple of days ago at WWDC.

https://support.apple.com/en-us/HT206181

It works almost exactly like you're describing and frankly I think it's amazing and am surprised more people don't rave about it.


As a counterpoint (and not necessarily a counterargument), this sort of paradigm would be orthogonal to and perhaps incompatible with the merits of "personal" computing. Collaborating in this kind of multiuser workspace requires common knowledge of a shared interface that does not necessarily provide efficient workflow for all. If a person were to use my customized setup, they wouldn't know how to close a window, let alone how to open one - everything is optimized solely for my personal ease of use, almost always at the cost of familiarity.

One of the few reasons why I have Windows installed on my machine was so that my ex-girlfriend could use my computer when she needed to. Somewhat amusingly, I had to teach her how to access the arrow keys under a function layer of my 60% keyboard to select the correct bootloader entry in order to do that. That's two new concepts foreign to most average users just to do what they might consider "turn on the computer".


Your concern about tailored user interfaces also applies to disabled people who physically can't do things the same way as the rest of the group, e.g. blind people who need to use a screen reader and keyboard commands rather than look at the screen and use a pointing device.

Having said that, some implementations of collaboration don't have this problem. In most modern networked applications that are designed to be multi-user, each client runs its own instance of the UI, and the clients synchronize the underlying data. This scheme can accommodate different user needs much better than, say, using something like VNC to have a bunch of people watch a shared frame buffer and take turns sending input to a shared UI.
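To make that concrete, here's a minimal sketch (not any particular product's protocol; all names are made up) of the "sync the data, not the pixels" idea: clients share only the document state, merged last-writer-wins via a logical clock, while presentation stays local, so one user can render visually and another through a screen reader. A single in-process object stands in for the network layer here; real systems use OT or CRDTs for merging.

    import itertools

    class SharedDoc:
        """The synchronized data model: key -> (logical_time, value)."""
        def __init__(self):
            self.state = {}

        def apply(self, key, stamped_value):
            # Last-writer-wins: keep the edit with the newer timestamp.
            current = self.state.get(key)
            if current is None or stamped_value[0] > current[0]:
                self.state[key] = stamped_value

    class Client:
        """Each client shares the doc but owns its own presentation."""
        clock = itertools.count(1)  # stand-in for a logical clock

        def __init__(self, name, doc, screen_reader=False):
            self.name = name
            self.doc = doc
            self.screen_reader = screen_reader  # per-user UI choice

        def edit(self, key, value):
            self.doc.apply(key, (next(Client.clock), value))

        def render(self):
            # Same data, different local presentation per client.
            for key, (_, value) in sorted(self.doc.state.items()):
                verb = "hears" if self.screen_reader else "sees"
                print(f"[{self.name} {verb}] {key}: {value}")

    doc = SharedDoc()
    alice = Client("alice", doc)
    bob = Client("bob", doc, screen_reader=True)
    alice.edit("title", "Q3 plan")
    bob.edit("body", "draft text")
    alice.render()
    bob.render()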


It's OSX-only, which makes it a non-starter for most orgs. We actually are 100% macbook, but we still use Google Docs since some people want to edit documents from their phone.


That's reasonable, but if the question is "why doesn't this exist" then the answer is "it does".

(For what it's worth, it's also available on any platform via the web app versions of iWork at icloud.com. I'm not saying this is necessarily practical, but, it's there.)


I think the point is that if this only exists in Apple World, then really it exists as a little, isolated island, and the entire point of having "anyone with a computer" able to play is lost.


Nothing will ever be "anyone with a computer".

It really all hinges on how you want to define "computer". There are some prerequisites like you need a network connection and display and input device(s). You also need some client software. In this case, the client consisting of a particular software stack doesn't seem that unreasonable.

If I can't play with you using the computer in my laser printer, that doesn't mean the vision of collaborative computing isn't here.


I think, for a desktop work environment, people would be happy if it worked, given a standard keyboard, mouse, and display, on iOS, Android, Windows, and OSX, with Linux support a bonus.

MS Office supports this, as does Google Docs, and apparently, so does iWork!


> As an aside, the whole "meta-verse" craze might pull these ideas through to the mainstream again.

I actually really agree with this hypothesis. People think the whole head mounted display thing is going to revolutionize interaction (and I do think eye contact will be a killer app)…

But I also think one of the biggest effects VR will have is to reset people's expectations about 2D interfaces. I think once we get used to the idea that “data exists in space” it’ll trickle down into the 2D world and—50 years after it was invented—we might see an alternative to WIMP.


Check out Croquet[1], they seem to be tackling the very problem you speak of.

[1]https://croquet.io/


Xorg has been able to handle multiple users in one session for quite some time. The problem, I think, is that most window managers don't handle multiple focuses very well.

I vaguely remember some multi-user system for Windows as well, but IIRC it was more of an R&D thing.


>The fact computers have been so isolated from one another (even with internet apps) is a complete tragedy. Even sending large files is a pain in the butt to this day.

We don't really have to send anything anymore; I can just place a file in a shared location. I can even share an incredibly high-resolution video on Google by just making a link.

>Best we have now is web-browsers syncing data silos, or a one way "let this person control my mouse + keyboard".

I am now able to edit Excel documents and see people in the same file making updates live. I am not saying this is peak collaboration but I find it pretty impressive.

The meta-verse may have a chance (I'm pulling for it in some shape), but let's not act like we aren't making progress :)


Multiplayer Excel/Powerpoint may be the biggest advance in Office-tech in the last 20 years. Things like Google Docs allow collaboration in a way not seen since ... the Mother of All Demos.


The question/problem with the approach is: Who owns the commons, the Shared Space? Usually the answer is Not So Good.


Google Docs pretty much handles the multiple users working on the same document problem as well as I could imagine. Am I missing something here?


I had the chance to meet him in SF. He was an inspirational guy; the only other person I've heard speak who made you want to leave the room and start building something, anything, was Woz.

But I am left wondering what could have been, what might have been. Nobody would fund him. Apple probably should have just given him a lab with a couple of employees and a modest budget. Who knows what he might have invented? He had that rare ability to see into the future. The world lost a good thirty years of his genius and not being a Valley insider I am left wondering why?


What do you mean? He was funded by SRI for a long time, and spent time visiting other people and spreading the word. I first met him when he visited PARC, probably for the umpteenth time, but I’d just joined.

He was full of interesting ideas, not someone who just had one big insight long ago, and was great to talk to about your own work.

His memorial service was full of amazing people whom he’d met and worked with in the Valley over the decades. Mind-boggling how that works — quite different from a fully remote world. We talk about WFH vs office, but less about the value of running into Doug Engelbart or Don Knuth while walking down the street.


> His memorial service was full of amazing people whom he’d met and worked with in the Valley over the decades

I was at that service and several people spoke about how his ideas did not really get the backing they deserved, Ted Nelson not the least of them.

This makes more sense when you realize that what Doug really wanted was to transform the way people worked far more than he wanted to invent cool new tech, but while people were happy to have him as a member of staff he needed to be given freedom to play with organizations and no one was ready to try anything as radical as he had in mind.

I remember there was a period where Google was trying to make him a permanent fixture and he said in a private conversation something to the effect of it feeling like being surrounded by a bunch of young people who just wanted to admire him like a statue, not engage with his ideas.


> what Doug really wanted was to transform the way people worked

Yeah, people were at the center of his thinking, not machines. Again, crazy farsight by today’s standards.

> feeling like being surrounded by a bunch of young people who just wanted to admire him like a statue

This is hard: classic impostor syndrome makes people feel afraid to ask a dumb question.

> not engage with his ideas.

A separate problem; some of the drivers of this I touched on in another comment; would be interesting to read what you think.

I remember feeling that Ted was talking about himself, as he chronically does. Ted’s good, radical, and farsighted ideas didn’t get the attention they deserved either, but I feel a lot of that was self inflicted. And the amount they deserved might have been less than he thought.

Come to think of it, Ted should recast transclusion as a web3 idea. His charge-for-everything approach would fit right in.


> I remember feeling that Ted was talking about himself, as he chronically does.

I'm quite confident he was talking about both Doug and himself. Ted definitely saw Doug as a kindred spirit with a great vision for how the world could be different who was ignored.


This 3-day-old comment, quoting a description of his time at Tymnet, fits all too well with what I'd expect of a star's middle/late career, of talent being unsupported.

https://news.ycombinator.com/item?id=31633395


I contend that there are two Hard Problems in building shared digital places: Discovery and Enabling Serendipity. And of the two, Discovery is probably the easier.


How many places on earth can you meet people of the calibre of Engelbart on the street?


Our streets are digital in a fully remote world. On HN, for example, I've seen the creator of DLang casually chime in on a conversation. A big piece from the past hits the front page and the author simply joins and answers questions.

While we've evolved to value in-person interactions more highly, let's not forget that even non-sonic, non-visual, near-real-time conversation with the actual creator would otherwise be simply impossible for someone sitting in a third-world country, like me :)


It really isn't the same. It's the random encounters in Silicon Valley. At a birthday party my child was attending, I had a great conversation with the head of AI from a self-driving company, a biochemist, and an engineering manager trying to scale up mentorship at Apple. I've never had that in any other area I've lived.


> probably should have just given him a lab with a couple of employees and a modest budget. Who knows what he might have invented?

This is perhaps the defining sadness that most saps at me. This is a world of such potential, that does such expensive, vast things. I really wish there were more room & space for people to go off & explore, to test out novel spaces.

Don't we have some surplus to feed the promising with, to give them a couple of allies to boldly sally forth with?

The corporation as a model has such deficiencies, such need for alignment & consensus & purpose. Humanity is just so much greater when there's some permissionless innovation, some real testing & pushing things just to find out, to try & see, to go further. Our models of labor & value seem so unable to make small but long bets on so many bright stars.


Almost everything is FOR PROFIT NOW in the western system. This kind of thing is more likely to be happening in China. R&D departments have been downsized in most cases (maybe not in bio/pharma, which still has a long lifecycle). We have few Musks who can go out and do interesting things.


According to his Wikipedia page (which is never entirely reliable):

> "Engelbart slipped into relative obscurity by the mid-1970s. As early as 1970, several of his researchers became alienated from him and left his organization for Xerox PARC, in part due to frustration, and in part due to differing views of the future of computing. Engelbart saw the future in collaborative, networked, timeshare (client-server) computers, which younger programmers rejected in favor of the personal computer. The conflict was both technical and ideological: the younger programmers came from an era where centralized power was highly suspect, and personal computing was just barely on the horizon."

Now both concepts have become more-or-less fully developed with a wide variety of small personal computers from RPis to desktop powerhouses, as well as rent-space-in-the-cloud client/server approaches. However, you do need to have a lot of faith in the overall power structure not going down some authoritarian rat hole to rely solely on the latter.

His ideas are certainly interesting and still relevant today:

> "Consider a group's Collective IQ to represent its capability for dealing with complex, urgent problems – to perceive and understand them adequately, to engage the stakeholders, to unearth the best candidate solutions, to assess resources and operational capabilities and select appropriate solutions and commitments, to be effective in organizing and executing the selected approach, to monitor the progress, to be able to adjust rapidly and appropriately to unforeseen complications, and so on."

https://dougengelbart.org/content/view/194/


Funny timing — the Future of Coding podcast (disclosure: hosted by myself, sponsored by Replit) just did an episode about Engelbart's Augmenting Human Intellect, perhaps offering a bit of the "more in-depth examination" Amjad was interested in.

https://futureofcoding.org/episodes/056

That paper (from '62) forms quite the contrast with MoAD (from '68). While the latter is quite specific, showing a lot of "features" that people tend to point to when citing Engelbart's influence, the former is arguably much more visionary. Doug had quite a lot more in mind than just pointing devices and video calling.


I listened to it (thanks mikewarot for the warning about the volume), and at some point two of you discuss something I agree with that I haven’t seen noted before when talking about Doug Engelbart:

> [... 1:17:32] Joe is writing down all of the things that you’re saying describing the system, and then he says that “it represents the clumsy phrasing and illogical progression of thought so typical of extemporaneous speech” [https://www.dougengelbart.org/pubs/augment-3906.html#3b5a] which A: is what Jimmy and I are doing right now, B: is what Doug Engelbart’s writing reads like anyways. His writing reads like clumsy and illogical progressions of extemporaneous speech, except that it’s like, worked to within an inch of its life. It is so overwrought, and yet at the same time clumsy and illogical, so that... you know... grrrr!

> [... 1:18:14] Doug is a bad writer, I don’t like Doug’s writing, I’ll say that, I have not enjoyed reading his writing. I enjoyed his fan fiction better than I enjoyed his technical manuals, it’s just not a pleasure to read. And it made me wonder, [...] did he feel the disconnect in quality between, like, Bush’s writing and his own writing? Did he feel like the quality of his writing needed to be formed into this very unappealing mold because of the context he was working in?

> [... 1:20:42] I do kind of agree with you, his writing is very awkward, right? It is hard to kind of get at the ideas that are in here, and you know, maybe there is something to it where he is... this is clearly, some of this is an intentional stylistic choice, right? [...] he wanted to write in this way for some particular reason or other, I think.


Waldrop's The Dream Machine is one of my favorite books, and I'm overdue for a re-read. It's about J.C.R. Licklider, but Engelbart definitely comes up again and again, and for sure the Augmenting Human Intellect thread is highly present. At least we no longer reduce Engelbart to "the man who invented the mouse." But yeah, the not-exactly-philosophical but big-idea-ed-ness, the frame-setting that these folks had, is such a valuable contribution. Engelbart had such a clear & sharp objective in mind, which he wrote about so early, & it's so compelling & humanistic a value. Few other tech workers across the decades seem to radiate with such clear moral & progressive clarity.

Thank you for this call out. This idea, of whether computers are here to mechanize processes that people fit into, or whether computers are tools to augment & enhance our individual agency, is a huge question.

Ursula Franklin has some good material on technological society. There are a couple of different points of view she proposes, but I really enjoy the distinction she makes between holistic technologies, where the capabilities of the user are expanded & opened and the user gains control, versus prescriptive technologies, which relate more to macro-level control, to enforcing set-down processes[1].

[1] https://en.wikipedia.org/wiki/Ursula_Franklin#Holistic_and_p...


I almost blew my ears off when I started it. I couldn't find a volume control in the media player, and the little button on the right wasn't it. Please add a volume control to the media player.


Sorry about that! It's a player made by Omny, my podcast host. But I'm unhappy with them for other reasons too, so I'll be switching away soon — self-hosting, and using the native media player. So your volume slider wishes will be fulfilled. Thank you for the request.


Suggestion: abandon the player, let your subscribers choose whatever player they like, such as their own browser's built-in player, and just post a link to the media file, like an mp3, which every browser can play natively. No middleman and far less fuss.


I met Doug Engelbart in the late 90's during his reboot sessions at Stanford. Lots of luminaries spoke, including Ted Nelson.

I was there because I read every one of his group's papers, in the mid 80's, and wrote a version of Augment in MS-DOS and Turbo Pascal. I think I showed him a demo.

He invited me to attend informal meetings at SRI. I left after a few weeks because I couldn't figure out how it would result in a product. Had already gone into deep personal debt once before (about $50K in 1990) trying to market something similar.


Monetization is an extra constraint that makes software worse :(


The counter-opinion was Nolan Bushnell's: the best way to test an idea was to put a coin slot on it. This was during the era of arcade machines.

But, then again, not sure how you put a coin slot on the Arpanet.


You don't happen to have that code lying around anywhere do you? I'd love to take a peek.


Ugh, am embarrassed by my coding style, from back then. The product still works on Windows XP, but is in storage in Iowa (am in California). Next time I visit family, will bring it out.


How actionable were the ideas in those papers? A few times I've read old essays and papers, and it always seems like there's eventually one poorly defined or completely abstract portion that boils down to "magic happens here".


The most salient point was a hierarchy of discourse. I was working for a project management software company at the time, so it fit well within the mindset of a Work Breakdown Structure (WBS) or Bill of Materials (BOM). Usually activities were organized around the WBS and BOM, with leveling of resources (people) and materials. So Augment filled in the documentation side for highly structured projects.

Am still working with core concepts: using a namespace hierarchy (akin to a WBS) and a directed graph (akin to activities). Except now, the activities are code.
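For illustration, a tiny sketch of that shape (hypothetical names, not the parent commenter's actual system): dotted paths form the WBS-like namespace hierarchy, and a directed graph of prerequisites orders the activities.

    from graphlib import TopologicalSorter  # stdlib, Python 3.9+

    # WBS-like namespace hierarchy, implied by dotted paths.
    wbs = ["bridge", "bridge.piers", "bridge.deck", "bridge.deck.rivets"]
    for path in wbs:
        print("  " * path.count(".") + path.rsplit(".", 1)[-1])

    # Directed graph of activities: node -> its prerequisites.
    activities = {
        "bridge.deck.rivets": {"bridge.deck"},
        "bridge.deck": {"bridge.piers"},
        "bridge.piers": set(),
    }

    # One valid execution order; scheduling and resource leveling
    # would hang off this.
    print(list(TopologicalSorter(activities).static_order()))
    # -> ['bridge.piers', 'bridge.deck', 'bridge.deck.rivets']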


Always glad to see this getting attention, I really enjoy computing history. There's a link to a video in the article, but it's part three of ten. Recently I came across this[0]. It's pretty much the whole thing in one video. It's long but it's well worth the watch. I was born in '68, so I can't remember the 60s, but I can remember the 70s and it's easy for me to understand why people felt Engelbart was, "Dealing lightning with both hands" at the Mother of All Demos.

[0]https://www.youtube.com/watch?v=yJDv-zdhzMY


And I just found this very recent biopic on Doug. Just watched 10 minutes so far, but it looks to be really well produced, with interview snippets with Alan Kay, Vint Cerf amongst others.

https://youtu.be/_7ZtISeGyCY


Cool, bookmarked, ty!


The main thing he wanted, and never got... was a group of people working together on some task unrelated to programming, with an agreed upon measure of productivity.

He wanted to see how much he could optimize their productivity over time with careful measurement and plain old trial and error.

It's never stated why funding dried up, but here's my suspicion: Nobody was willing to risk upsetting the traditional hierarchy with something that might obsolete management, for example. Nobody other than an angel investor, or group of old retired technologists can pull something like that off. It has to be someone with time, or f*ck you money. I have time.


Amjad's tribute is touching, and also be sure to read the Wikipedia page he links to:

https://en.wikipedia.org/wiki/The_Mother_of_All_Demos

I met Doug in the late 1970s when we worked in the same Tymshare office at 20705 Valley Green Drive in Cupertino. He had a little cubicle like the rest of us, in the far corner of the building behind the pine tree in the middle of this Street View image:

https://goo.gl/maps/JEZAqS6vqVAAfjP6A

We only talked a few times, but I could always see the sadness in his eyes from the shabby treatment Tymshare gave him. No real team, and not much encouragement to continue his mission.

Tymshare was quite a land of lost opportunities. Earlier in the 1970s they had a "programmable terminal" in a side office. I think it may have been a Datapoint 2200?

I found the programming manual and saw that I could write code to respond to the cursor keys, put characters on the screen where I wanted them, and then run Conway's Game of Life!

In my years of working there coding on Teletype machines, I'd never experienced anything like this.

Eventually my manager saw me working on my GoL program, and kindly but firmly said, "Mike, this looks like fun and all, but no one is ever going to want to do something like this. We have important business problems to solve. Back to work now!"

At one point they did consider doing something to take advantage of these newfangled "microcomputers". But they didn't pursue it very far, because they figured it might be a threat to their established timesharing business.

And they were right!


A piece I wrote on this a couple years back: https://thenewstack.io/49-years-ago-douglas-engelbart-predic...


I’m always left amazed, confused and disappointed when I read about this.

How did something so monumental get built and demonstrated, but then not get followed up on for more than 30 years?

Were there humongous caveats involved in the demo that made it impossible to leave the lab even for major corporations?


The caveats were that 1 - you needed specialized research hardware that basically nobody had and 2 - people didn’t understand how revolutionary computers were.* Even in the late 80s the general consensus was that personal computers would be just used by specialists because “no executive can type, or would want to”

And the labs in those days were pushing forward and “inventing the future”. We had network-native systems: not just computers with network adaptors, but cloud native software with mail, filesystem etc transparently available over the net. In fact where did the “cloud” concept come from? A TCP design document in the 70s.

When you’d look outside the lab it felt like the past. DOS was the state of the art. And you’d visit your friends at different institutions and see different approaches to address the same problems.

These days I never get that “this is the exciting future” feeling. The old nexus corporate labs (PARC, IBM, Bell Labs (RIP), and others) aren’t as dominant as they used to be, but when I visit their successors (e.g. Google) the research doesn’t have that same jaw-drop impact. Not because I’m jaded, but because nothing appears to have come back through a time machine any more. Perhaps because computing went mainstream and then the constraints appeared?

* and are: I think people still haven’t grasped how transformational they will be once we finally get computers to work and be useful. We’re still just puttering around at the base of the mountain.


You requested my thoughts here in a different comment :)

> The caveats were that 1 - you needed specialized research hardware that basically nobody had and 2 - people didn’t understand how revolutionary computers were.* Even in the late 80s the general consensus was that personal computers would be just used by specialists because “no executive can type, or would want to”

I agree with this completely. The teleconferencing, for example, required a big string of trucks with equipment to transmit the signal IIRC. An incredible tech demo but not even close to something that could be productized.

> Not because I’m jaded, but because nothing appears to have come back through a time machine any more. Perhaps because computing went mainstream and then the constraints appeared?

I think another way to frame this is that early computers offered a clear paradigm shift, the shift happened, and now everything on the near horizon is within that paradigm. This is true even for some of the more exciting projects like Bret Victor's Dynamicland (I'm not sure it will actually go anywhere, but I'm happy to see someone carrying the torch of the multimedia dreams that died in the 90s).

Personally, I think biotech has the best potential for the next "it's the future!" style revolution.


I’m a big Arthur C. Clarke fan, in part because his imagined future had maybe-'80s computers flying in (hopefully) 2100s rockets.

SpaceX is the company I look at as clawing the future back to us piece by piece.


I think it’s fair to say that a viable commercial market for these technologies was inconceivable. In 1968 the photocopier was only 10 years old, the Boeing 747 was brand new, the moon landings had not yet happened and the microprocessor did not yet exist, nor did Moore’s law.

As for caveats, Engelbart was using a $1.3M computer (2022 money), of which only 50 existed, and two TV microwave links ($$$), which are impossible to scale.

Still, we can dream.



He was decades ahead of his time, as were the folks at PARC, in terms of something that could be turned into a profitable business or consumer product. The time horizon was just too long for the most powerful ideas to really take root in the industry. As a result, many of the 'big picture' concepts were discarded and shorter term goals that could be achieved in a few years time were developed instead. When management gets compensated in terms of quarters to a small number of years, that's the time-frame they tend to focus on.


> How did something so monumental get built and demonstrated, but then not get followed up on for more than 30 years?

Same as with moon shots. Abandoned.


But the goal of that was fully achieved - to put a man on the moon and return him safely to Earth, before the Russians did.


I'd like to leave a link to Doug's 1962 paper:

https://www.dougengelbart.org/pubs/augment-3906.html

It's worth noting that despite the historical magnitude of the '68 demo, the real "meat and potatoes" of Doug's work is still the operating philosophy he developed that allowed SRI to dream up all this groovy shit and make it work. It's also worth remembering Bill English, the "director" of the demo, who sometimes doesn't get enough credit.


> Bill English, the "director" of the demo, who sometimes doesn't get enough credit.

Yes, as Doug Engelbart said when talking about the demo: “but anyway it worked, and the main reason it worked is because Bill English is a genius”.

https://news.ycombinator.com/item?id=24064902


And behind the camera, Stewart Brand


This is from the Wikipedia article about the Mother of All Demos:

>The 90-minute presentation demonstrated for the first time many of the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a collaborative real-time editor

If those things were possible in 1968, why did most of us start using them only in the '90s?

Is it normal to have that big a gap between a technological demonstration and the technology becoming available?


His team pretty much all moved on to Xerox, where they released the Xerox Alto just five years later, which then influenced Apple's engineers, who took the inspiration from the Alto and channeled it into the Lisa and the OG Macintosh, all of which were still prohibitively expensive for home use.

So it's not like people forgot about it for a couple of decades, there's a clear line from that demo to where we are now.


To the extent that the Alto had a chorded keyset as an input device.


Short answer - the PC won and we started at the bottom and worked up.

Had the PC not won it's possible that something like multi-user systems would have become the main thing (which we now basically have, but we call it "the cloud").


I must, with respect, disagree that "the cloud" is the same (or even close to the same) thing. Indeed, you use the acronym "PC", which, I'll remind the crowd, stands for "Personal Computer". It has recently dawned on me that the thing is not (for most people, ordinary, 'non-techie' people) "Personal", and is scarcely a "Computer" in the sense of a Turing-machine/universal computing device. Allow me to unpack that:

"Personal" is the pair of shoes I wear every day, even though they're years past their use-by date, are ugly, smell funny and the soles are worn. For you it might be the jumper knitted for you by your Nan. It's the objects we use that, over the years, mould themselves to our personal idiosyncrasies, habits, preferences and style. Our computers scarcely have that (particularly in the Apple and MS worlds of Personal Computing!) and hardly live long enough to acquire personalisation/personality. Not to mention, too, it's quite hard to do. Example: my (Linux, KDE) box is pretty happy for me to define custom hot-keys for all sorts of things, from firing up frequently-used apps (I scarcely ever use the desktop menu these days) to typing my email address. But could my (tech-clueless) brother do that? Extremely unlikely. And it takes several non-intuitive steps to make a new hotkey...

"Computer": How programmable is my phone? (I've done Android dev work. I don't ever wish to do it again.) How scriptable? How malleable in casual ways? Not at all. It's Google's device much more than it's mine. It's a sort of locked-down surveillance television, not a general-purpose computer programmable-by-me (except in a highly specialist and narrow sense.)

Much of the blame I put at the doorstep of Visual Basic which reduced the scope and utility of direct-manipulation interfaces into a terribly narrow, constrained version of "paper forms on an electronic screen". It put the vision back by at least a decade. Maybe several.

Finally, though, I have hope we might emerge from that. There's a bunch of us working on it.


This is state of the art stuff that would have been prohibitively expensive for most people. It's kind of like how a lot of home automation stuff was theorized back in the 90s and early 2000s but it only became mainstream recently.


After reading Bardini's excellent biography I found that Dr. Engelbart's unfortunate involvement with Werner Erhard and est was the beginning of his journey down the wrong path. It's hard to say if his ideas around Collective IQ, CoDIAK and DKR were too obtuse for practical application or the world just caught up with his ideas. In any case, it seems centralizing human knowledge isn't going to save the world after all.


I never knew of it until somebody mentioned it on a newsgroup — amazing for the time. So thankful for these early pioneers trying out new ideas before they became commercial successes later on.


When did 'demo' come into use as shorthand for demonstration?


At least 1936, according to this unsourced page: https://www.etymonline.com/word/demo


I actually thought this would be about a demoscene production.


> as a lone inventor genius a la Tesla

The de-facto deification is pervasive.


Now that's a worthwhile Wikipedia page.


Matt Ridley famously claimed that "no one knows how to build a computer mouse." Well, this guy did.



