Solvequill Blog · coding · 10 min read

Recursion Explained: How Functions Call Themselves (And When to Stop)

What recursion actually does in memory, the two rules every recursive function must follow, how to trace a call stack, and why it often replaces a tricky loop.

Recursion is one of those concepts that sounds circular until you see one concrete example; then suddenly everything makes sense. The idea: a function solves a problem by solving a smaller version of the same problem, and keeps doing that until the problem is trivial enough to answer directly.

The two rules every recursive function must satisfy

  1. Base case: there must be at least one input for which the function returns a result directly, without calling itself. Without this the function runs forever.
  2. Recursive case: each recursive call must move toward the base case. The problem must shrink. If it does not, you have infinite recursion.
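The two rules can be seen side by side in a tiny sketch (the `countdown` function here is our own illustration, not from the post):

```javascript
// countdown(n) collects n, n-1, ..., 1 into an array.
function countdown(n, acc = []) {
  if (n <= 0) return acc;        // Rule 1: base case — returns without recursing
  acc.push(n);
  return countdown(n - 1, acc);  // Rule 2: the argument shrinks toward the base case
}

countdown(3); // → [3, 2, 1]
```

Delete the `if (n <= 0)` line and the function recurses forever; pass `n + 1` instead of `n - 1` and the problem never shrinks. Both rules, both failure modes.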

The canonical example: factorial

factorial(n) = n · factorial(n−1), with factorial(0) = 1. In code:

```javascript
function factorial(n) {
  if (n === 0) return 1;
  return n * factorial(n - 1);
}
```

Calling factorial(4) creates a chain: 4 · factorial(3) = 4 · 3 · factorial(2) = 4 · 3 · 2 · factorial(1) = 4 · 3 · 2 · 1 · factorial(0) = 4 · 3 · 2 · 1 · 1 = 24.

What the call stack actually looks like

Each function call creates a new frame on the call stack: a block of memory holding that call's local variables and the return address. For factorial(4), five frames stack up, one for each call from factorial(4) down to factorial(0). When factorial(0) returns 1, the runtime pops frames one by one, multiplying as it goes. Stack overflow happens when too many frames accumulate before any return, which is why infinite recursion crashes the program.
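One way to watch the frames accumulate is to thread a depth counter through the calls. This is a sketch of our own (`factorialDepth` and `maxDepth` are illustrative names, not a runtime API):

```javascript
// Track how deep the recursion goes by passing the current depth along.
let maxDepth = 0;

function factorialDepth(n, depth = 1) {
  maxDepth = Math.max(maxDepth, depth);          // record the deepest point reached
  if (n === 0) return 1;                         // the frame for factorialDepth(0) still counts
  return n * factorialDepth(n - 1, depth + 1);   // each recursive call adds one frame
}

factorialDepth(4);
console.log(maxDepth); // 5 — one frame per call, factorialDepth(4) through factorialDepth(0)
```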

Tree recursion: Fibonacci

fib(n) = fib(n−1) + fib(n−2), with fib(0) = 0 and fib(1) = 1. This is tree recursion: each call spawns two more. fib(5) creates a tree with 15 function calls; fib(40) requires over 300 million. The fix is memoization: cache each result the first time it is computed so the function checks the cache before recursing.
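You can verify the call count yourself by adding a counter to the naive version (a sketch; `fibNaive` and `calls` are our own instrumentation):

```javascript
// Count every call made by the naive, unmemoized Fibonacci.
let calls = 0;

function fibNaive(n) {
  calls++;
  if (n <= 1) return n;                     // base cases: fib(0) = 0, fib(1) = 1
  return fibNaive(n - 1) + fibNaive(n - 2); // tree recursion: two calls per non-base call
}

fibNaive(5);
console.log(calls); // 15 calls for fib(5)
```

The counts obey calls(n) = calls(n−1) + calls(n−2) + 1, which is why they grow as fast as the Fibonacci numbers themselves.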

Memoized Fibonacci

```javascript
function fib(n, memo = {}) {
  if (n in memo) return memo[n];                      // cache hit: answer directly
  if (n <= 1) return n;                               // base cases
  memo[n] = fib(n - 1, memo) + fib(n - 2, memo);      // compute once, then store
  return memo[n];
}
```

Now fib(40) makes only 79 calls instead of over 300 million. The memo object turns exponential time into linear time.

When recursion replaces a loop

Recursion shines when the data is naturally tree-shaped: file system traversal, JSON parsing, tree or graph algorithms. Iterating a flat array is naturally loopy; traversing a nested structure is naturally recursive. Forcing a loop on a tree problem requires managing your own explicit stack — which is exactly what the language is doing for you when you use recursion.
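A small illustration of that fit (`deepSum` is our own example, not from the post): summing every number inside an arbitrarily nested array. The nesting depth is unknown ahead of time, so the call stack mirrors the structure for us.

```javascript
// Recursively sum all numbers in a nested array of any depth.
function deepSum(value) {
  if (typeof value === 'number') return value;  // base case: a leaf
  if (Array.isArray(value)) {
    // recursive case: one call per child, the stack tracks where we are
    return value.reduce((total, item) => total + deepSum(item), 0);
  }
  return 0; // ignore anything that is neither a number nor an array
}

deepSum([1, [2, [3, 4]], 5]); // → 15
```

The loop version of this needs a worklist array that you push and pop yourself, which is the explicit stack the paragraph above describes.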

Tail recursion and when to worry about stack depth

A tail-recursive function makes the recursive call as its very last action, with no multiplication or addition waiting afterward. Some languages (not JavaScript, in practice) optimize tail calls so they do not add a new stack frame. For most interview problems and moderate input sizes, stack overflow is not a concern. For recursion deeper than a few thousand calls, consider iteration or an explicit stack.
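The tail-recursive shape of factorial threads an accumulator through the calls so nothing is left pending after each one. A sketch (`factorialTail` and `acc` are our own names; note that JavaScript engines generally do not perform this optimization, so the stack still grows):

```javascript
// Tail-recursive factorial: the accumulator carries the partial product.
function factorialTail(n, acc = 1) {
  if (n === 0) return acc;               // base case returns the finished product
  return factorialTail(n - 1, n * acc);  // last action: nothing waits after the call
}

factorialTail(4); // → 24
```

Compare with the earlier version, where `n *` waits on the stack for each recursive call to return.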

Tracing a recursive call tree by hand is how the concept becomes automatic. Paste a recursive function into Solvequill and the explanation video will draw the call tree, show each return value propagating back up, and name the base case clearly.

Turn your own question into an explanation video

Type the question or upload a photo; Solvequill produces a narrated video that walks through the solution step by step.

Open Solvequill

Keep reading
