I just returned from teaching a seminar on Phil of CS and this post pops up - great timing! I made my students read Turing's 1936 paper, which probably no aspiring CS student would ever get subjected to otherwise. Much has been written about that paper, but what stands out is that Turing is not really interested at all in practical computations.
My last note on that paper was:
"Perhaps Turing's notion of computation appeals to us because it rests on a purely theoretical analysis? What about all the people who have _constructed_ computing machines in the centuries before him? Should we place more emphasis on the engineering aspects of CS? (neither Leibniz, nor Babbage, nor Zuse knew about TMs!)"
I think I need to add political aspects to the last question now too.
William Rapaport's book on the philosophy of computer science has a chapter devoted to a "slow reading" of Turing's 1936 paper. It's excellent and full of interesting historical asides, such as the fact that Turing does not talk about the Halting Problem -- that was Martin Davis' later gloss. The reason is that Turing wants to work, among other things, with machines that output computable reals, which have infinite decimal expansions. Turing's dichotomy is between machines that keep outputting successive digits of the result (what Turing called "symbols of the first kind") vs. the ones that print out only finitely many output symbols and then either halt or keep printing out what Turing called "symbols of the second kind." He called the first type of machine "circle-free" and the second type "circular."
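To make the dichotomy concrete, here's a minimal sketch (my illustration in Python, not Turing's formalism): two generators standing in for the two kinds of machine. The circle-free one would emit binary digits of the computable real 1/3 forever; the circular one emits a couple of digits and then never produces another.

```python
from fractions import Fraction

def circle_free(n):
    """Successive binary digits of the computable real 1/3 = 0.010101... (base 2).
    Circle-free in Turing's sense: it would emit digits ("symbols of the
    first kind") forever; the parameter n only truncates the demo."""
    x = Fraction(1, 3)
    for _ in range(n):
        x *= 2
        d = int(x)   # the integer part is the next binary digit
        yield d
        x -= d

def circular():
    """Circular in Turing's sense: only finitely many output digits ever
    appear (here it simply halts; it could just as well loop forever
    printing only auxiliary "symbols of the second kind")."""
    yield 0
    yield 1          # ...and no digit of the first kind ever follows

print(list(circle_free(8)))  # [0, 1, 0, 1, 0, 1, 0, 1]
print(list(circular()))      # [0, 1] -- the output stops here
```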
As it happens we were reading Rapaport in parallel. The chapter is really great, although he commits to the Turing-the-father-of-CS narrative too. Btw, if you are interested in a recent appreciation of Turing and the halting problem, you'll absolutely have to read Hamkins and Nenu's paper "Did Turing prove the undecidability of the halting problem?".
There's some justification in referring to Turing as one of the fathers of theoretical CS, but that's just one particular subfield of CS. The construction of the earliest computing machines took place without any knowledge of Turing's work; one could, however, give some credit to Shannon for his MS thesis on switching-circuit implementations of Boolean logic.
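For flavor, here's a toy rendering (mine, not Shannon's notation) of the thesis's central observation: relay contacts in series conduct like AND, contacts in parallel like OR, and a normally-closed contact inverts, so any Boolean function can be wired up out of switches.

```python
# Toy model of Shannon's switching-circuit/Boolean-logic correspondence.
def series(a, b):        # current flows only if both contacts are closed: AND
    return a and b

def parallel(a, b):      # current flows if either contact is closed: OR
    return a or b

def normally_closed(a):  # a contact that opens when energized: NOT
    return not a

def xor(a, b):           # a composite circuit built from the primitives
    return parallel(series(a, normally_closed(b)),
                    series(normally_closed(a), b))

for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))  # reproduces the XOR truth table
```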
One of the first texts that tried to teach models of computation to undergrads was Minsky's _Computation: Finite and Infinite Machines_, and there he is very clear that Turing's contribution was imported into the study of computing as a formalization of what an "effective procedure" is. The book starts with finite-state machines and McCulloch-Pitts nets; TMs appear only later, when infinite machines are introduced.
Hi Nico, I found the course description but couldn't find the syllabus for this course, could you share it somewhere? As a philosophy undergrad and someone interested in this question, I'm curious!
Sure, I just uploaded it to https://philo.hlrs.de/people/nico-formanek/#teaching.
For the coming courses I'll try not to put anything behind our uni's "paywall" anymore.
The possibility that science is a "post-hoc rationalization of technology" is often more real than we are happy to admit!
In the shameless-self-promotion corner: I wrote a short post about Leon Cooper and the commercialization of neural network technology in the early 1990s, which I think offers another historical case study for some of the points discussed here:
https://open.substack.com/pub/liorfox/p/for-more-information-call-intels
(These were the days when you first had to get a physics Nobel Prize and _then_ switch to doing neural network stuff, rather than doing neural networks and then getting a physics Nobel Prize...)
I was part of one of the first cohorts of PhD students at the MIT Media Lab, and one of the jobs they gave us was to try to come up with some sort of intellectual core for this new field (dubbed Media Arts and Sciences). This was a fairly impossible task, although we took it seriously (e.g., maybe it's a synthesis of signal processing, perceptual psychology, and McLuhanish media theory). But it turns out the field didn't have much in the way of intellectual coherence; it was more a random assortment of cool stuff that this particular collection of faculty was into and could get funded for. That's OK I guess, but it makes CS look like a model of intellectual rigor in comparison. 30 years later and I don't think it's any more coherent. That's kind of a shame, because the territory it covers (the confluence of technology with human activity and thought) is kind of important.
I started my PhD at the Media Lab in 2000, and it certainly wasn't coherent then either. But there was a magic to the Lab's lack of disciplinary grounding. The discipline could be whatever you wanted it to be. You could imagine a better discipline. Though this didn't necessarily get you best paper awards at specific ACM conferences, it might lead to a breakthrough new technology, a major art exhibition, a groundbreaking science paper, or a MacArthur Award.
I took it for granted while I was there, but there is something very special about the Media Lab, and I always wish academia were more open to adopting its unabashedly multidisciplinary "Montessori Grad School" approach to inquiry.
Oh cool! Yes there was a lot of magic going on there. I certainly enjoyed the interdisciplinary freedom (although I probably could have done with more discipline, but that's my problem).
Their all-corporate funding model was also craven and cynical, but at least they avoided the controversy of indirect costs.
Marvin Minsky (who Maxim mentioned up thread) once said, "At some point I realized it was easier to fundraise from industry than the government. That is when I became a capitalist."
In light of his later life fundraising, perhaps this should now be read as a cautionary tale.
Minsky was one of my advisors, actually; fortunately I was gone long before the Epstein era. Complicated feelings. My research was mostly supported by Apple and I had no problems with that at all, but I did conclude at one point that the lab's real innovation was in funding models.
If you're interested, I wrote a bit about my experience at the Media Lab in this post: https://www.argmin.net/p/demo-or-die
Cool, thanks! I never had much of a problem with the demo-or-die attitude, honestly. It's really the best way to convey truly new UI ideas. My own reminiscing about the lab: https://hyperphor.com/ammdi/Media-Science
George Dyson gave a talk where he said that a) survivors of von Neumann's IAS program told him that they understood the relevance of Turing's LMS paper to the machine they were building, and b) Dyson verified that that volume of the Proceedings in the IAS library was well-read, in contrast to the other volumes, which were pristine. Unfortunately I can't find the video of the talk any more.
If you find it, please send it my way!
I think shifts in undergraduate attitudes played a role as well.
For a long time the CS major might have been partially job training, but those jobs were for weird nerds, and normal people mostly didn't want to be involved. With that crowd, you could get away with a certain amount of non-applied course material, and even some theory.
Sometime in the 2000s (pre-2010 in California, post-2010 on the East Coast), there was a change. There was always the list of "finance/consulting/med school/law school" as the socially acceptable career choices for an elite college student. "Tech" got added to the list, as a prestigious target job for all the socially normal people. Naturally many of them wanted to major in CS.
But this group of people just wants the job, and isn't willing to put up with distractions from getting it. So there's a lot more resistance to non-industry-relevant coursework from undergraduates now, in addition to all the pressures on research.
Is there a reading list for the course mentioned in your footnote?
A subject of a future post.