There is something about the computer — the computer has almost since its beginning been basically a solution looking for a problem.
People come to MIT and to other places, people from all sorts of establishments — the medical establishment, the legal establishment, the education establishment, and in effect they say, "You have there a very wonderful instrument which solves a lot of problems. Surely there must be problems in my establishment — in this case, the educational establishment, for which your wonderful instrument is a solution. Please tell me for what problems your wonderful instrument is a solution."
The questioning should start the other way — it should perhaps start with the question of what education is supposed to accomplish in the first place. Then perhaps [one should] state some priorities — it should accomplish this, it should do that, it should do the other thing. Then one might ask, in terms of what it's supposed to do, what are the priorities? What are the most urgent problems? And once one has identified the urgent problems, then one can perhaps say, "Here is a problem for which the computer seems to be well-suited." I think that's the way it has to begin.
This was published in 1985, back when home computers were still relatively new and publications like Creative Computing were on the stands.
I mention CC in this context for two reasons. One, it was originally devised as a journal for discussing the role of computing in education: the "what problems is it suited for?" question, rather than the "let's take this thing computers already do and label it 'educational'" approach.
The other reason, which grew out of the first one, is how CC always maintained what I guess could be called a humanistic philosophy of computing — the idea that we should look at the problems we have in society, education among them, and see where computers are a good fit to problems on a case-by-case basis.
This stands in pretty stark contrast to what we have now, where we have a surfeit of solutions-without-a-real-problem, like bitcoin. I don't mean to imply that the underlying technologies are useless, only that they seem to be either ultimately redundant or grossly misapplied.
I suspect this kind of thinking is a long-ingrained habit in Western society generally, and in American society in particular. It's the reflexive belief that there exists a technical solution for every social problem, or that every social problem is merely a technical problem in disguise. Put out a broad enough shotgun spray of technical solutions, and your problems will supposedly evaporate.
There is a good impulse under all this. It's the idea that human beings are problem-solving creatures, and that they are not helpless against their surroundings. I am a child of this thinking, and I would be foolish to attack it, because my world was built from it and I benefit from it every second of every day. But it's imperfect and shortsighted, and it is no insult to realize it has to be treated with skepticism by the very people who make the most of it.
What gets me is how the most recent incarnation of all this, the Silicon Valley ethos I guess you could call it, has not changed any of it at heart but simply put a friendly coat of paint on it. It's always easier to accept what amounts to regressive social conditions when they seem new or feel like they're in your best interest.
When there is talk of finding technical solutions to problems that merit them, they are rarely, if ever, part of a larger program of social change, because actual social change is difficult to enact and measure. Gadgets are comparatively easy. Look how the world flooded itself with smartphones. Look how, in the span of less than a generation, we remodeled a significant chunk of our social interaction and discourse around them.
I guess this is all my long-winded way of saying we need something like Creative Computing again. Most journalism and general-interest writing about computing is either how-to material or the wrong kind of issue-oriented discussion. There needs to be work that looks first at the world around us and then asks: where, specifically, can computing or technology help? What would be truly curative, and not just a pave-over for a fissure that runs to the roots?
But the cynical side of me thinks we now live in a world where computing is the primary assumption about modern life, where we assume our lives must be molded to fit the technology we originally developed as an aid to those lives, and where any questions about how to put life first and technology second are being asked decades too late.
New York City
Other Lives Of The Mind