Procedures and the Processes They Generate
To become experts, we must learn to visualize the processes generated by various types of procedures. Only after we have developed such a skill can we learn to reliably construct programs that exhibit the desired behavior.
- A procedure is a pattern for the local evolution of a computational process: how one stage is built on the previous stage.
- The global behavior of a computational process is much harder to reason about.
- Processes governed by different types of procedures generate different "shapes."
- Computational processes consume two important resources: time and space.
Linear Recursion and Iteration
- The factorial of \(N\) is defined as the product of the integers on the interval \([1,N]\).
- The naive recursive implementation creates a curved shape, as the trace below shows.
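A sketch of the recursive definition that the trace assumes (essentially the book's version):

(define (factorial n)
  (if (= n 1)
      1                            ; base case
      (* n (factorial (- n 1)))))  ; deferred multiplication

Tracing (factorial 4):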
(factorial 4)
(* 4 (factorial 3))
(* 4 (* 3 (factorial 2)))
(* 4 (* 3 (* 2 (factorial 1))))
(* 4 (* 3 (* 2 1)))
(* 4 (* 3 2))
(* 4 6)
24
- The iterative implementation maintains a running product and multiplies the numbers from 1 to \(N\). This creates a shape with a straight edge:
(define (factorial n)
  (fact-iter 1 1 n))

(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))
(factorial 4)
(fact-iter 1 1 4)
(fact-iter 1 2 4)
(fact-iter 2 3 4)
(fact-iter 6 4 4)
(fact-iter 24 5 4)
24
- Both compute the same mathematical function, but the computational processes evolve very differently.
- The first one is a linear recursive process. The chain of deferred operations causes an expansion (as operations are added) and a contraction (as operations are performed).
- The interpreter must keep track of all these operations.
- It is a linear recursive process because the information it must keep track of (the call stack) grows linearly with \(N\).
- The second is a linear iterative process. It does not grow and shrink.
- It is summarized by a fixed number of state variables and a rule describing how they should be updated and when the process should terminate.
- It is a linear iterative process because the number of steps grows linearly with \(N\).
- In the iterative process, the variables provide a complete description of the state of the process at any point. In the recursive process, there is "hidden" information that makes it impossible to resume the process midway through.
- The longer the chain of deferred operations, the more information must be maintained (in a stack, as we will see).
- A recursive procedure is simply a procedure that refers to itself directly or indirectly.
- A recursive process, in contrast, refers to the way the process evolves, as described above.
- A recursive procedure can generate an iterative process in Scheme thanks to tail-call optimization. In other languages, special-purpose looping constructs are needed.
Tree Recursion
- With tree recursion, the procedure invokes itself more than once, causing the process to evolve in the shape of a tree.
- The naive Fibonacci procedure calls itself twice each time it is invoked, so each branch splits into two at each level.
- In general, the number of steps required by a tree-recursive process will be proportional to the number of nodes in the tree, while the space required will be proportional to the maximum depth of the tree.
- The iterative Fibonacci procedure is vastly more efficient in space and in time.
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))
(fib 6) => 8
(define (fib n)
  (define (iter a b count)
    (if (= count 0)
        b
        (iter (+ a b) a (- count 1))))
  (iter 1 0 n))
(fib 6) => 8
Example: Counting change
(define (count-change amount)
  (define (cc a n)
    (cond ((< a 0) 0)
          ((= a 0) 1)
          ((= n 0) 0)
          (else (+ (cc a (- n 1))
                   (cc (- a (first-denomination n)) n)))))
  (cc amount 5))

(define (first-denomination kinds-of-coins)
  (vector-ref '#(1 5 10 25 50) (- kinds-of-coins 1)))
(count-change 100) => 292
Let \(f(A,N)\) represent the number of ways of changing the amount \(A\) using \(N\) kinds of coins. If the first kind of coin has denomination \(D\), then \(f(A,N) = f(A,N-1) + f(A-D,N)\). In words, there are two situations: when you do not use any of the first kind of coin, and when you do. The value of \(f(A,N-1)\) assumes we don't use the first kind at all; the value of \(f(A-D,N)\) assumes we use one or more of the first kind.
That rule and a few degenerate cases are sufficient to describe an algorithm for counting the number of ways of changing amounts of money. We can define it with the following piecewise function:

\[
f(A,N) =
\begin{cases}
1 & \text{if } A = 0 \\
0 & \text{if } A < 0 \text{ or } N = 0 \\
f(A,N-1) + f(A-D,N) & \text{otherwise}
\end{cases}
\]
Like Fibonacci, the easy tree-recursive implementation involves a lot of redundancy. Unlike it, there is no obvious iterative solution. One way to improve the performance of the tree-recursive process is to use memoization.
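One way to sketch that in Scheme is to wrap fib with an association-list cache (a hash table would be faster, but the relevant procedures differ between Scheme implementations):

(define (memo-fib n)
  (define cache '())                 ; association list of (n . fib(n)) pairs
  (define (fib n)
    (let ((hit (assv n cache)))
      (if hit
          (cdr hit)                  ; already computed: reuse the answer
          (let ((result (cond ((= n 0) 0)
                              ((= n 1) 1)
                              (else (+ (fib (- n 1))
                                       (fib (- n 2)))))))
            (set! cache (cons (cons n result) cache))
            result))))
  (fib n))

(memo-fib 30) => 832040

Each value is computed at most once, so the process takes \(\Theta(n)\) steps (ignoring the cost of the cache lookups) instead of an exponential number.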
Orders of Growth
- Different processes consume different amounts of computational resources.
- We compare this using order of growth, a gross measure of the resources required by a process as the inputs become larger.
- Let \(n\) be a parameter that measures the size of a problem—it could be the input itself, the tolerance, the number of rows in the matrix, etc.
- Let \(R(n)\) be the amount of resources the process requires for a problem of size \(n\). This could be time, space (amount of memory), the number of registers used, etc.
- We say that \(R(n)\) has order of growth \(\Theta(f(n))\), written \(R(n) = \Theta(f(n))\), if there are positive constants \(A\) and \(B\) independent of \(n\) such that \(Af(n) \leq R(n) \leq Bf(n)\) for any sufficiently large value of \(n\).
- The value \(R(n)\) is sandwiched between \(Af(n)\) and \(Bf(n)\).
- The linear recursive process for computing factorials had \(\Theta(n)\) time and \(\Theta(n)\) space (both linear), whereas the linear iterative process had \(\Theta(n)\) time but \(\Theta(1)\) space (constant).
- The order of growth is a crude description of the behavior of a process.
- Its importance is that it lets us predict how the resources required will change when we, say, increment \(n\) or double \(n\). For example, doubling the input of a \(\Theta(n)\) process roughly doubles the resources it needs, while doubling the input of a \(\Theta(\log n)\) process adds only a constant amount.
Exponentiation
One way to calculate \(b\) to the \(n\)th power is via the following recursive definition:

\[ b^n = b \cdot b^{n-1}, \qquad b^0 = 1 \]

A faster method is to use successive squaring:

\[ b^n = (b^{n/2})^2 \quad \text{if } n \text{ is even}, \qquad b^n = b \cdot b^{n-1} \quad \text{if } n \text{ is odd} \]
Recursive, naive: \(\Theta(n)\) time, \(\Theta(n)\) space.
(define (square x) (* x x))

(define (expt b n)
  (if (= n 0)
      1
      (* b (expt b (- n 1)))))
(expt 2 5) => 32
Iterative, naive: \(\Theta(n)\) time, \(\Theta(1)\) space.
(define (expt b n)
  (define (iter counter prod)
    (if (= counter 0)
        prod
        (iter (- counter 1) (* prod b))))
  (iter n 1))
(expt 2 5) => 32
Recursive, successive squaring: \(\Theta(\log n)\) time, \(\Theta(\log n)\) space.
(define (fast-expt b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-expt b (/ n 2))))
        (else (* b (fast-expt b (- n 1))))))
(fast-expt 2 5) => 32
Exercises: 1.16, 1.17, 1.18, 1.19
Greatest Common Divisors
- The GCD of integers \(a\) and \(b\) is the largest integer that divides both \(a\) and \(b\) with no remainder. For example, \(\gcd(16,28)=4\).
- An efficient algorithm uses the fact that \(\gcd(a,b) = \gcd(b, a \mod b)\).
- For example, we can reduce (gcd 206 40) as follows:

(gcd 206 40)
(gcd 40 6)
(gcd 6 4)
(gcd 4 2)
(gcd 2 0)
2
- This always works: repeated reduction eventually produces a pair whose second number is zero, and the other number is then the GCD of the original pair.
- This is called Euclid's Algorithm (a Scheme sketch follows this list).
- Lamé’s Theorem: If Euclid's Algorithm requires \(k\) steps to compute the GCD of some pair \((a,b)\), then \(\min \{a,b\} \ge \operatorname{Fib}(k)\). Since \(\operatorname{Fib}(k)\) grows exponentially with \(k\), the number of steps grows as the logarithm of the smaller number, making the process \(\Theta(\log n)\).
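The reduction rule translates directly into Scheme (a sketch, essentially the book's procedure):

(define (gcd a b)
  (if (= b 0)
      a                         ; second number is zero: first is the GCD
      (gcd b (remainder a b))))

(gcd 206 40) => 2

The recursive call is in tail position, so this generates an iterative process.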
Exercises: 1.20
Example: Testing for Primality
Searching for divisors
- One way to test for primality is to search for the number's divisors, checking consecutive integers starting from 2; since a divisor need only be sought up to \(\sqrt{n}\), this takes \(\Theta(\sqrt{n})\) steps.
- A number \(n\) is prime if and only if \(n\) is its own smallest divisor greater than 1 (see the sketch below).
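A sketch along those lines, reusing square from the exponentiation section:

(define (smallest-divisor n)
  (find-divisor n 2))

(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)    ; no divisor found up to sqrt(n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))

(define (divides? a b)
  (= (remainder b a) 0))

(define (prime? n)
  (= n (smallest-divisor n)))

(prime? 19) => #t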
The Fermat test
The Fermat test is a \(\Theta(\log n)\) primality test based on Fermat's Little Theorem:
If \(n\) is a prime number and \(a\) is a positive integer less than \(n\), then \(a\) raised to the \(n\)th power is congruent to \(a\) modulo \(n\). For example, with \(n = 5\) and \(a = 3\): \(3^5 = 243 \equiv 3 \pmod{5}\).
The test works like this (a code sketch follows the list):
- Given a number \(n\), pick a random number \(a<n\) and calculate \(a^n \mod n\).
- Fail: If the result is not equal to \(a\), then \(n\) is not prime.
- Pass: If the result is equal to \(a\), then \(n\) is likely prime.
- Repeat. The more times the number passes the test, the more confident we are that \(n\) is prime. If there is a single failure, \(n\) is certainly not prime.
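A sketch of the test in Scheme (assuming a random procedure that returns a nonnegative integer below its argument, as in MIT Scheme). The helper expmod computes \(b^e \mod m\) by successive squaring, reducing modulo \(m\) at every step so intermediate results stay small:

(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m)) m))
        (else
         (remainder (* base (expmod base (- exp 1) m)) m))))

(define (fermat-test n)
  (define (try-it a)
    (= (expmod a n n) a))           ; does a^n = a (mod n)?
  (try-it (+ 1 (random (- n 1)))))  ; pick a random a in [1, n-1]

(define (fast-prime? n times)
  (cond ((= times 0) #t)            ; passed every round: likely prime
        ((fermat-test n) (fast-prime? n (- times 1)))
        (else #f)))                 ; one failure: certainly composite

(fast-prime? 19 10) => #t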
Probabilistic methods
- Most familiar algorithms compute an answer that is guaranteed to be correct.
- Not so with the Fermat test. If \(n\) passes the Fermat test for one random value of \(a\), all we know is that there is a better than 50% chance of \(n\) being prime.
- A probabilistic algorithm does not always give a correct result, but you can prove that the chance of error becomes arbitrarily small.
- We can make the probability of error in our primality test as small as we like simply by running more Fermat tests. The exception is Carmichael numbers, rare composites that pass the Fermat test for every choice of \(a\).