The book "Structure and Interpretation of Computer Programs" by Abelson and Sussman has this to say in the preface to the first edition:
"The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is an emergence of what might best be called *procedural epistemology* -- the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects.
Mathematics provides a framework for dealing precisely with notions of "what is." Computation provides a framework for dealing precisely with notions of "how to.""
This statement has always bothered me. It seemed to me that mathematics and mathematicians were not being given their due. An exchange of ideas with Professor Tim Poston has helped me clarify my position vis-à-vis this passage.
1. The revolution is not "in the way we think" but in the technology that became available to express what we think. Mathematicians have always bothered about procedures, whether a procedure in Euclidean geometry or a procedure to solve an equation. In fact, Knuth says somewhere that the history of number theory can be seen as the history of algorithms. Gauss's work and Galois's work were also motivated by an interest in procedures.
However, as Professor Tim Poston pointed out to me, mathematicians did not emphasize procedures as much as they did the statements they arrived at using the procedures. Perhaps this was because mathematicians felt that they were "discovering" an area of (Platonic) reality and their statements represented "truths" about this reality. The procedures that led them to these truths were not considered to be important.
This viewpoint was gradually eroded, starting with the discovery of non-Euclidean geometries and climaxing with the work of Gödel.
2. The problem with the second sentence of the quotation lies in the use of the words "emergence" and "knowledge". If the claim is that "procedural epistemology" *began* in the 20th century in the aftermath of Turing's work, then that is difficult to accept. It is also difficult to accept that either mathematics or computing is studying "the structure of knowledge". Seems to me that both mathematics and physics have given up any claims they might have had that they were discovering knowledge. They are just playing with models, and some of these seem to fit some aspects of reality to a useful level of approximation.
3. And finally the last sentence. As I said above, mathematics has always been interested in dealing precisely with both the notions of "what is" and "how to". What has changed is the meaning of the word "precisely". What is new is the idea that "precisely" means a machine should be able to execute the procedure.
4. In my presentation on "The Nature of Computing" I have tried to present the concerns of computing as part of a historical continuity.
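As an aside on point 3, Euclid's algorithm for the greatest common divisor is perhaps the oldest example of mathematics dealing precisely with "how to", and today's notion of "precisely" lets a machine execute it directly. A minimal sketch in Python (my choice of language for illustration, not one discussed above):

```python
def gcd(a, b):
    """Euclid's algorithm: the 'how to' behind the 'what is'
    (gcd(a, b) is the largest integer dividing both a and b)."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```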
Thursday, August 13, 2009
The 10 Step Design Process
There was a one-day national workshop on the "Latest Applicable Trends in Mathematical Sciences" on August 7, 2009 at the Al-Ameer College of Engineering & IT at Visakhapatnam. Gave a lecture there with the title "A Mathematical Approach to Computing and Programming". It was attended by about 70 college lecturers of mathematics. I spoke about the motivation behind the Tata Consultancy Services Initiative, the work we have done so far, and the work still to be done. The technical part of the talk was the 10 Step Design Process.
The workshop was extremely well organized by Sri P. Veeraiah, head of the Department of Basic Sciences and Humanities.
Wednesday, June 24, 2009
Recent work
Haven't posted for a while. Have been working hard on three topics.
1. A characterization of partial recursive functions using mapcode: For newcomers to this blog, this is equivalent to using Knuth's definition of an algorithm given on page 7 of Volume 1 of his book "The Art of Computer Programming". This effort began nearly 5 years ago. About 3 years ago my friend Ashok Maitra showed me how to prove a crucial theorem. (Sadly, Ashok is not with us any more.) It has taken us all this time to find the right definitions and the best proofs. This is joint work with K. Venkata Rao. We have now written it up. Will announce details some time later.
2. A simplified approach to Dijkstra's wp-calculus: We (Venkata Rao & I) have found that there are three intuitive ways of defining the concept of a non-deterministic mechanism. One of them is Dijkstra's. Another is a formalization of the simple idea that at each step the state changes from a state x to one of the states in a set D(x). The third is an unexpected dual to Dijkstra's definition; it also turns out to be actually used in the theory of computation, though not explicitly noted. The best thing is that all three definitions turn out to be mathematically equivalent! Because of this there arises the possibility of implementing the Dijkstra-Gries methodology for deriving algorithms using an easier formalism. Indications of this have already been given in the book, and now there is the possibility of extending this work to the non-deterministic case.
3. As noted earlier in this blog the mapcode formalism was evolved to help the mathematically minded appreciate computing concepts. This was expounded at length in the book. A recent insight is that the mapcode point of view has actually two parts. One is a philosophical approach to programming that is based on the 10-step design process and the other is the language in which the philosophy is conveyed. In the book the mathematical language of sets and maps was used to formulate and convey the philosophy. But the philosophy can be taught using any programming language!
With the help of my friend and colleague Anand of TCS I have been studying the languages Haskell and Erlang. Now I should write up how to teach mapcode computing with the help of either of these languages. Hopefully the notes will be ready in a few weeks.
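To give newcomers a flavour of the formalism behind all three items, here is a minimal sketch of the mapcode idea in Python (the names `iterate` and `fact_step` are my own illustration, not the book's): a computation is a map F on a state space, iterated until a fixed point is reached, and the halting states are exactly the fixed points of F.

```python
def iterate(F, x):
    """Iterate the map F on state x until a fixed point is reached.
    In mapcode terms, the halting states are the fixed points of F."""
    while True:
        y = F(x)
        if y == x:          # fixed point: the computation has halted
            return x
        x = y

# Example: factorial as a map on states (n, acc); (0, acc) is a fixed point.
def fact_step(state):
    n, acc = state
    return state if n == 0 else (n - 1, acc * n)

print(iterate(fact_step, (5, 1)))  # (0, 120)
```

A non-deterministic mechanism in the sense of item 2 would instead let F(x) be a *set* D(x) of possible next states, which is one of the three equivalent formulations mentioned above.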
Monday, March 30, 2009
Turing Machines - 2
Instead of starting with the definition of a Turing machine and trying to persuade the reader that such a machine can be universal, we will start with a model that a reader can easily accept to be universal and then try to show that such a model is just a Turing machine when we look at it in the right perspective.
1. What do we do when we try to do a mathematical calculation? We have a text book to refer to and a note book to write in. In a certain state of mind we read some part of a page of the text book. What we read changes our state of mind. And then we could do one or all of the following: a) Make some calculations in our note book; b) go to the previous page in the text to renew our understanding of something we learned earlier; c) go to the next page in the text to continue our effort to understand how the calculation is done.
Surely that is all anyone can do, right? So one can say that we have described a universal method of doing a mathematical computation.
2. Now let us change the setting slightly. Don't let the text book and the note book be separate entities. Suppose we have a workbook in which only a part of the page has text and the rest is left blank for us to do our working. If the blank space is not enough, we are free to add one or more additional sheets at that point to help us. Surely the model continues to be universal?
3. Now comes the crucial jump. Recall that the definition of a Turing machine only requires the tape alphabet to be a finite set. There is no limit set on how large it can be.
Take a deep breath now. Can you think of the entire text and the blank space on the workbook page as a single symbol from a finite set of symbols? And the entire page of the workbook, with all the additional sheets attached, as a single square?
Don't we have a Turing machine then?
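For readers who want to see the formal object behind this picture, here is a minimal Turing-machine simulator in Python (the encoding and names are my own rough sketch, not the definition in the Book). Note how the tape is a map from positions to symbols with all but finitely many squares blank, and how the "state of mind" together with the symbol read determines what is written and which way the head moves:

```python
from collections import defaultdict

def run_tm(delta, state, tape, pos, accept, max_steps=10_000):
    """Run a Turing machine given by the transition map delta:
    (state, symbol) -> (new_state, new_symbol, move), move in {-1, +1}.
    The tape is a dict from positions to symbols; missing squares read
    as blank, so only finitely many squares are ever non-blank."""
    tape = defaultdict(lambda: ' ', tape)
    for _ in range(max_steps):
        if state in accept:
            return state, dict(tape)
        state, tape[pos], move = delta[(state, tape[pos])]
        pos += move
    raise RuntimeError("no halt within step bound")

# Example: flip a string of 0s and 1s, halting on the trailing blank.
delta = {('flip', '0'): ('flip', '1', +1),
         ('flip', '1'): ('flip', '0', +1),
         ('flip', ' '): ('done', ' ', +1)}
state, tape = run_tm(delta, 'flip', {0: '1', 1: '0', 2: '1'}, 0, {'done'})
print(''.join(tape[i] for i in range(3)))  # 010
```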
Sunday, March 22, 2009
Turing Machines - 1
In the way it is usually defined, a Turing machine has an infinite tape for storage. This storage tape also serves as the input device and the output device.
Even though the tape is infinite, it is to be noted that the definition specifically requires that all except a finite number of squares are blank. This means that a TM is actually dealing with a finite string. We could have a finite string with two end-of-string marker symbols and arrange matters so that if the reading head happens to see one of them, it simply moves it one square away. See the Book for a detailed definition along these lines.
But if a TM actually deals with finite but arbitrarily long strings, just like finite automata, then how is it that a TM can be universal? The reason is the ability of the head of the TM to move both forward and back and change the symbols it encounters.
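The end-marker arrangement can be sketched concretely. Assuming a tape held as a Python list bounded by markers (the names `END`, `BLANK` and `read_square` are my own illustration, not the Book's definition), the head "moves the marker one square away" by inserting a fresh blank, so the string stays finite but can grow without bound:

```python
END = '$'    # end-of-string marker (my notation)
BLANK = ' '

def read_square(tape, pos):
    """Read the square at pos on a finite tape bounded by end markers.
    If the head sees an end marker, the marker is pushed one square
    outward and the head is left on the fresh blank square."""
    if tape[pos] == END:
        if pos == 0:                 # left marker: blank appears to its right
            tape.insert(1, BLANK)
            pos = 1
        else:                        # right marker: blank appears to its left
            tape.insert(pos, BLANK)
    return tape[pos], pos

tape = list('$ab$')
sym, pos = read_square(tape, 3)      # head bumps into the right marker
print(''.join(tape))  # $ab $
sym, pos = read_square(tape, 0)      # head bumps into the left marker
print(''.join(tape))  # $ ab $
```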
In the next post I will attempt a simple argument that shows that a TM is "naturally and obviously" universal.
Sunday, November 23, 2008
Computing on Paper - 2
Have designed tentative paper data formats for bubblesort, insertionsort, quicksort, and mergesort algorithms. You can find them here.
Saturday, November 22, 2008
What is Mathematical Computer Science?
Here is my definition.
Find out about the book "An Introduction to Mathematical Computer Science" here.
Thursday, November 20, 2008
Computing on Paper - 1
When I taught some graph and sorting algorithms the first couple of times the larger part of my energies went into understanding the algorithms as given in text books and rewriting them as mapcode theorems with proofs. The third time around, I asked myself afresh how best I could teach them. Dr. S. Durga Bhavani generously shared with me her teaching methods and references.
Would like to record here some of my observations now.
One should look at all algorithms from two different points of view: as algorithms we can work out on paper, and as algorithms we have to get a machine to process. The levels of complexness differ. (The word 'complexity' has a definite technical meaning in relation to algorithms. So I am choosing a different noun form of 'complex' that is also allowed by the dictionary.)
The algorithms we learned as children for the arithmetic operations are a marvel of design. They use a two-dimensional paper data structure in which we can write to the left or right or above or below. Their design is chosen to balance what we keep in our minds and what is written down. In case the output we compute is not the 'book answer', what we have written down is enough to help us work through it again and find where the mistakes occurred. Can we design such formats for the algorithms we teach in computer science courses?
In the last month I have found that this is possible in principle. The surprise was that the paper complexness of the algorithms I looked at is far less than their machine complexness. Actually, they are not even as complex as the school arithmetic algorithms. It is perhaps a good idea to teach students to work out some examples on paper, in the paper format, with oral instructions from the instructor. It takes just a few minutes and three to four examples to understand the logic and the method of execution of the algorithm. After this initiation the instructor can pose the problem of converting the paper algorithm to a machine algorithm. The advantage is that the students have a good gut-level comprehension of what they are trying to get the machine to do.
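As a rough illustration of the kind of conversion I have in mind (this is my own sketch, not one of the tentative formats mentioned elsewhere on this blog), bubblesort can be worked on paper by writing the list again after each pass, one row below another; the machine version then simply prints the rows the student would have written:

```python
def bubblesort_trace(xs):
    """Bubblesort, printing the list after each pass, the way
    successive rows would be written one below another on paper."""
    xs = list(xs)
    print(xs)                        # the given row
    for i in range(len(xs) - 1, 0, -1):
        for j in range(i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
        print(xs)                    # one written row per pass
    return xs

bubblesort_trace([5, 1, 4, 2])
```

Working three or four such traces by hand first, as suggested above, gives the student the gut-level picture before the loop structure is ever discussed.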
Will put up some paper algorithms for sorting in my next post.