Section 2 - Recursion
2B.2.1 Recursive Functions (Review from CS 2a)
We will review a functional concept, recursion, which you may have had in CS 2a. We need recursion in our next search algorithm, the binary search.

Look at this example:
#include <cmath>
#include <iostream>
using namespace std;

// function prototypes -----------------------
double myPow(double base, long exp);

int main()
{
   cout << myPow(2.4, 4) << " vs. " << pow(2.4, 4) << endl;
   cout << myPow(1.23, 9) << " vs. " << pow(1.23, 9) << endl;
   cout << myPow(5535, -29) << " vs. " << pow(5535., -29) << endl;
}

// ------- My Power --------------
double myPow(double base, long exp)
{
   if (exp == 0)
      return 1.;
   if (exp < 0)
      return 1. / myPow(base, -exp);
   else
      return base * myPow(base, exp - 1);
}
As you see, I am defining my own home-rolled version of the <cmath> function pow(), which I call myPow(). In my version, I only care about integer exponents, so I make the exponent parameter a long. Then, I test to see whether I get the same results as <cmath> pow(), and, of course, I do. The odd thing about this method is that it calls itself!
Recursion is sometimes defined as a method calling itself, and myPow() certainly does that. There is more to it than that, but that usually gets the right neurons firing when thinking about the concept.
This method myPow() is used to compute a base raised to an exp power, say 2.4^4, or "2.4 raised to the 4th power," which is
2.4 * 2.4 * 2.4 * 2.4.
The way this method works is by breaking the situation into cases. Ignore the exp<0 case for now.
- If exp == 0, then the answer is 1, because anything to the 0 power is 1. If you didn't know that before, now you do.
- If exp > 0, let's say 4, then myPow(base, 4) ends up using the final else statement of the method, which evaluates to:
base * myPow(base, 3);
So the function calls itself using the same base, but a power of 3, instead of 4. Whatever answer the function gets on that "inner" call, it multiplies that answer by the base, and that is the final answer we want. This should make sense to you, since
base^4 is nothing more than base * base^3
In fact, this always works: myPow(7, 5) is computed by finding myPow(7, 4) and multiplying that answer by one more 7. You don't have to follow the logic all the way down; just realize that if the function keeps calling itself (as it will do) the exp gets smaller and smaller until it finally hits 0, at which time the function returns a 1. Then the function calls (which have built up) start "paying off" -- or "unwinding" -- resulting in that 1 being multiplied by a 7. Continuing to unwind, that answer is multiplied by another 7, etc. This happens five times in total. It sounds messy, but look at the program. It's really clean and simple.
- The exp < 0 case is for negative exponents. A negative exponent is just the reciprocal of the positive exponent (remember that from high school arithmetic?), and that is why that case looks as it does.
In general, a recursive function has at least two cases.
- One case returns a value without the function calling itself. There is no recursive call in this case. This is called the escape case.
- The other case(s) results in the function calling itself, but the inner function call uses different arguments than it was passed. The new arguments that it passes to "itself" must bring us closer to the so-called escape case (above).
In our example, the escape case is the one in which exp==0. The other cases always result in a recursive call that brings us closer to that escape case. For example, if we enter the method with exp equal to 7, then we call myPow() recursively passing in 6 to the exp, thus bringing it closer to the escape case of 0.
As you can see, the above program works:
33.1776 vs. 33.1776
6.44386 vs. 6.44386
2.81572e-109 vs. 2.81572e-109
Recursion is usually not as efficient as an algorithm that does the same thing directly with loops, without calling itself. The myPow() method is not a good example of when recursion should be used, but it is the universal way to teach recursion (although factorials and Fibonacci are two other popular substrates, and Fibonacci is a disaster if done recursively).
There are times when you simply have no viable alternative to recursion, and there are other times when you can manage without recursion, but with no meaningful savings and an increase in complexity.
An example of a situation where one cannot do without recursion is a 3-D imaging application called a "ray tracer." Most such algorithms must "recurse" - there is just no way around it. "Radiosity" algorithms are even more sophisticated lighting methods that add realism to computer animation. Again, recursion is a requirement here. The upcoming binary search algorithm is an example where one might find a non-recursive solution, but it would suffer the following two drawbacks:
- It would lack simplicity and clarity.
- It would not really reduce the overhead significantly.
You have to gauge the time and place to apply recursion.