In the last post, I quoted Donald Knuth crediting George Forsythe with coining the term “computer science.” Forsythe, the founder and first chair of Stanford University's Computer Science Department, first proposed the name in a 1962 lecture at Brown University.
Brown was an early mover in computer science: it had already established its University Computing Laboratory, and its Applied Math department partnered with IBM to sponsor a lecture series on “Applications of Digital Computers.” The series included a lecture on AI by Herb Simon and a lecture by DH Lehmer on using computers to solve open problems in number theory. Forsythe’s talk, interestingly, was the only one that did not focus on an application area. Instead, he lectured on “Educational Implications of the Computer Revolution.”
Forsythe’s lecture argued for the establishment of new academic departments. Though he credits Fein’s influence on his thinking, his presentation is more eloquent and polished than Fein’s. Forsythe was well-versed in the ins and outs of academic lobbying. Given the ultraconservatism of academia, you can’t start a department without a revolution. Forsythe’s lecture thus sets the stage with a sentence that could begin any introspective essay about computers written after 1950:
“Those of us who work with automatic digital computers suffer from a certain megalomania. We consider that we are not merely working in an area of great importance, we insist that we are instruments of a revolution, the Computer Revolution… In 1945, the fastest generally available computer was a person with a desk calculator, averaging a multiplication about every 10 seconds. In 1962, anyone with the money can obtain a computer capable of multiplying in 0.000025 second. This acceleration by a factor of about half a million, or between 5 and 6 orders of magnitude.”
Scaling laws, everyone! With the benefit of hindsight, we know this exponential trend would continue for decades, though at a somewhat slower rate. Forsythe’s numbers work out to more than a quadrupling every two years. At our subsequent, Moore’s Law-esque pace, we have only been able to eke out another 14 orders of magnitude in floating point operation speed since then.
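If you want to check the arithmetic, here is a back-of-the-envelope sketch in Python. It uses only the figures quoted in this post (Forsythe’s 1945 and 1962 multiplication times, the 14 orders of magnitude, and the lecture’s 63-year vintage); nothing else is assumed:

```python
# Back-of-the-envelope check of the rates above, using only the numbers
# quoted in the post.
import math

speedup_1945_to_1962 = 10 / 0.000025       # ~400,000, "about half a million"
years = 1962 - 1945                        # 17 years
factor_per_two_years = speedup_1945_to_1962 ** (2 / years)
print(f"1945-1962: {speedup_1945_to_1962:,.0f}x overall, "
      f"~{factor_per_two_years:.1f}x every two years")          # ~4.6x

# 14 orders of magnitude over the 63 years since 1962 is a gentler pace.
later_years = 63
later_factor = 10 ** (14 * 2 / later_years)
doubling_years = later_years * math.log(2) / (14 * math.log(10))
print(f"1962-2025: ~{later_factor:.1f}x every two years, "
      f"a doubling roughly every {doubling_years:.1f} years")    # ~2.8x, ~1.4 yr
```

Either way you slice it, the 1945 to 1962 burst Forsythe witnessed is the steeper slope.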
Forsythe then lists some of the potentially revolutionary applications of computers: language translation, medical diagnosis, OCR, and speech-to-text. And assuredly, we’ve made progress on all of these fronts. He also asserts, “One key property of computers is the automation of routine decision-making.” However, he doesn’t pontificate about the loftier goals of AI that were bandied about at the time by MIT faculty.
Forsythe didn’t need to speculate about science fiction. The applications he listed were impressive and wide-reaching enough to argue that we needed to educate people about how to use computers, how to advance computing technology, and how to understand the potential impacts of computers on society. He estimated that there were around 5000 computers in the US in 1962, and hence, like Fein, he noted the demand for trained professionals. Moreover, Forsythe saw that everyone would need to understand how to use computers:
“The student of social sciences now needs as much technical background in computing as does his brother in physical science. In fact, research in social science is one of the biggest generators of massive files of data. Dealing with them gracefully may require considerably more computer skill than dealing with scientific problems.”
Given the revolution and the need for education, Forsythe argued that universities needed to step up. Every university should have a computing center, and that center should serve the entire campus. It would be just as valuable to students planning to work in the computer industry as to those who were not. “Our students of engineering, business, and science should learn how easy it can be to use automatic computers for their homework.” Sometimes you have to be careful what you wish for.
Forsythe thus describes his vision for a curriculum, giving the syllabus for his introductory programming class. The initial offering at Stanford had 140 students.1 The topic list included Boolean circuits, assembly language, the BALGOL programming language, and elementary algorithms in numerical analysis. The class also leaned heavily on labs, with all students sharing time on a Burroughs 220. Each student wrote five programs, which were autograded. The assignments included solving polynomial equations and sorting numbers. In addition to this practical training, Forsythe covered the history and sociology of the field, as he argued, “The vast majority of university students must realize the intellectual and sociological implications of a computer world. They must contemplate what computers mean for man’s thinking, for his employment, and for his social organization.”
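For flavor, here is roughly what two of those exercises might look like today. The lecture names the problems (solving polynomial equations, sorting numbers) but not the algorithms, and the students of course wrote BALGOL for a Burroughs 220, not Python; Newton’s method and insertion sort below are just plausible stand-ins:

```python
# A modern sketch of two of the 1962 exercises: find a real root of a
# polynomial and sort a list of numbers. The algorithms here (Newton's
# method, insertion sort) are guesses; the lecture only names the problems.

def poly_and_derivative(coeffs, x):
    """Evaluate a polynomial (coefficients highest degree first) and its
    derivative at x, via Horner's rule."""
    p, dp = 0.0, 0.0
    for c in coeffs:
        dp = dp * x + p
        p = p * x + c
    return p, dp

def newton_root(coeffs, x0, tol=1e-10, max_iter=100):
    """Newton's method for a real root of the polynomial, starting at x0."""
    x = x0
    for _ in range(max_iter):
        p, dp = poly_and_derivative(coeffs, x)
        if abs(p) < tol:
            break
        x -= p / dp
    return x

def insertion_sort(nums):
    """Sort a list of numbers in place and return it."""
    for i in range(1, len(nums)):
        key, j = nums[i], i - 1
        while j >= 0 and nums[j] > key:
            nums[j + 1] = nums[j]
            j -= 1
        nums[j + 1] = key
    return nums

print(newton_root([1.0, 0.0, -2.0], x0=1.0))   # root of x^2 - 2, ~1.41421356
print(insertion_sort([3.1, -2.0, 7.5, 0.0]))   # [-2.0, 0.0, 3.1, 7.5]
```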
Beyond the coursework, there was the question of the research discipline. This is what Forsythe calls “computer science,” though he notes, “It is known also as communication science or synnoetics.” I’m sorry, but I must start using the term synnoetics. Too good. And even better, synnoetics also sprang from the imagination of Louis Fein.
Forsythe’s version of synnoetics includes the topics of Programming, Numerical Analysis, Automata Theory, Data Processing, Business Games, Adaptive Systems, Information Theory, Information Retrieval, Recursive Function Theory, and Computer Linguistics. He envisioned faculty having joint appointments with departments of mathematics, physics, psychology, philosophy, medicine, business, engineering, and law. “One of the significant roles of computer science should be to provide an academic clubhouse where interdisciplinary stimulation can counteract the well-known effects of specialization.”
Sadly, this cross-cutting, interdisciplinary vision didn’t pan out. I wonder about this a lot. Why is it that academics perpetually advocate for fewer boundaries and more multidisciplinary playgrounds, yet end up with narrow definitions of technical depth and ossified disciplinary boundaries? Those faculty who moved into the new synnoetics departments ended up hiring more faculty like themselves, imprinting the discipline with the predilections of the first movers. This is why optimization research remains core at the University of Wisconsin, and numerical analysis remains part of CS at UC Berkeley. The signal processing, information theory, and control faculty stayed in their engineering departments, as did the computational researchers in physics, medicine, and law. Because of departmental jockeying for resources, boundaries were raised and purviews were narrowed.
My favorite part of Forsythe’s vision is its expansiveness in viewing the university as an agile place for training, inquiry, and knowledge discovery. I don’t know what the future holds for the academy, and it’s easy to be pessimistic and cynical. But if you want a positive case for our future, Forsythe’s 63-year-old lecture isn’t the worst place to start.
1. This class has only increased in size by one order of magnitude. I wonder what Forsythe would make of the failed exponential progress in the reach of a Stanford education.
I have in front of me Alexander Lerner's 1967 _Foundations of Cybernetics_ (Начала кибернетики in the original Russian), which more or less instantiates Forsythe's synnoetics program. It covers dynamical systems, signals, regulation, optimal control, automata and Turing machines, information theory, computation, adaptation, games, learning and pattern recognition, large-scale systems, operations research, neurophysiology of the brain, and reciprocal relations between humans and machines. Throughout the book, there are potted biographies of key thinkers, including Wiener, Lyapunov, Turing, and von Neumann. I cannot think of a similar text in the Western literature at that time. (Incidentally, Lerner was Vapnik's PhD advisor at the Moscow Institute of Control Sciences.)
I wonder how the autograding of the BALGOL programs you mentioned was done. Wonderful piece. Thanks for sharing this. Silos are normative for humans, is my suspicion.