new is always better
New is always better, right? The new iPhone is better than the old one. The updated app is better than the old version. A new programming language is better than any previous one. Just look at all the new features, possibilities, and enhancements!
This belief seems so deeply ingrained that we seldom spare a thought for old things, and when we do, we get nostalgic about how simple and inefficient things were back then. Remember the days when you visited a friend with a dozen floppy disks just to copy that one hot new game?
We have it so much better these days.
And since everything is so much better today than it was years ago, we needn't waste a single thought on the history of computing. I find it interesting that most other disciplines scrutinize their history, while computing mostly ignores its past, assuming that new is always better.
When you study computer science, you'll probably start with low-level computing, like assembly or machine code. Soon you will learn a higher-level language, like C, or move on to object-oriented programming in Java, and if you're lucky you will learn about more modern languages like Python, JavaScript, or Go. You won't stop there, of course, but continue with recent frameworks, from Django to React or Vue. Or maybe you jumped in at some random point and moved onwards from there.
There are a couple of interesting things to note.
First, there is always a myriad of reasons why a newer approach is better than the old one, and why we don't do things the old way anymore. Every programmer can describe in detail all the advantages and reasons why this is better than that, and why the new approach is the only way to tackle the future. You know: memory management, fewer bugs, faster development cycles, less code, better performance. You've heard it all before.
Second, this line of thinking is always exactly that: a line. A linear progression. We have old, and we have new. There is nothing in between, no crossroads, no other pathways, no different ideas or ways to look at things. Maybe you've heard of some related ideas, but they are old, and therefore uninteresting.
Here’s the problem: when we learn that new technology is always better, we imagine this is the only way technology can be better. We lose sight of the fact that there are many ways we can really innovate. The mantra might actually be doing more harm than good.
We learn to think about computers in one particular way. We burrow into this way of thinking and figure everything out. Then we pass all this knowledge on to future generations, convinced we already have everything figured out.
If you’re never exposed to new ideas and contexts, you assume you know what you’re doing. If you are taught there is only one way to think about computers, you don’t imagine there are other ways to think about them. We celebrate alleged innovation, having fleshed out all the details, improved, and optimized everything a computer has to offer.
Engineering is all about optimization, and you can’t optimize without being firmly anchored to the context you’re in. That anchor makes innovation and creativity so difficult to achieve. When we grow up with dogma, it’s really hard to break out of it.
If you read about the history of computing, and let yourself be open to other ways of thinking, you will become thoroughly dissatisfied with the limitations of every modern computer.
Reading about Vannevar Bush's ideas on lightning-fast knowledge storage and retrieval will make you want to smash your computer against the wall. Looking at Douglas Engelbart's work on augmenting human intellect and collaboration will urge you to throw every Google Doc and Zoom session out the window. Becoming familiar with Alan Kay's notion of the future of computing will make you dump your iPad next to the other children's toys. Learning about Ted Nelson's vision of a repository for the world's knowledge will make you despair of the World Wide Web.
And those are just a few examples.
The early days of computing were so interesting because nobody back then was a computer scientist. Everybody came into computing with different backgrounds, interests, and knowledge of other domains. They didn’t know anything about computers and therefore tried everything they could think of to solve their problems.
Today, each hype cycle gets pushed to the absolute maximum, until it starts failing to deliver on its promises, only to be replaced by the next buzzword cycle.
AI? Blockchain? Driverless cars? I mean, what else? All these concepts sound smart and futuristic, but the people pushing them haven't the faintest clue what they mean for society, or how to go about making any of them actually happen.
But they do it anyway because new is always better.
We should be better than this. Or in Douglas Engelbart’s words:
"These days, the problem isn't how to innovate; it's how to get society to adopt the good ideas that already exist."