Algorithms
I used to think algorithms were an interview hazing ritual with no connection to real work. Then I spent three hours debugging a feature that turned out to be slow because I was doing a nested scan where a single sorted pass would have worked. The fix was six lines. The insight that led to those six lines came from understanding a pattern—divide and conquer—that I'd dismissed as "academic."
Algorithms aren't about competitive programming. They're about recognizing patterns in problems and knowing which strategy fits. This post covers the core algorithmic ideas that show up repeatedly in real engineering work.
Sorting
Sorting is the most studied problem in computer science, and for good reason: many problems become simpler once the data is sorted. Binary search, deduplication, merging datasets, finding medians—all of these require or benefit from sorted input.
Merge Sort
Merge sort divides the array in half, recursively sorts each half, and merges the results. It's the purest example of divide and conquer.
```js
function mergeSort(arr) {
  if (arr.length <= 1) return arr
  const mid = Math.floor(arr.length / 2)
  const left = mergeSort(arr.slice(0, mid))
  const right = mergeSort(arr.slice(mid))
  return merge(left, right)
}

function merge(left, right) {
  const result = []
  let i = 0
  let j = 0
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) result.push(left[i++])
    else result.push(right[j++])
  }
  return result.concat(left.slice(i), right.slice(j))
}
```

- Time: O(n log n) — always, regardless of input.
- Space: O(n) — needs extra arrays for merging.
- Stable: Yes — equal elements keep their original order.
Merge sort is predictable. No worst-case degradation. It's the algorithm behind Array.prototype.sort() in Firefox (SpiderMonkey) and is the basis for external sorting (sorting data that doesn't fit in memory).
Quicksort
Quicksort picks a pivot, partitions the array into elements smaller and larger than the pivot, and recursively sorts each partition.
```js
function quickSort(arr, lo = 0, hi = arr.length - 1) {
  if (lo >= hi) return arr
  const pivotIdx = partition(arr, lo, hi)
  quickSort(arr, lo, pivotIdx - 1)
  quickSort(arr, pivotIdx + 1, hi)
  return arr
}

function partition(arr, lo, hi) {
  const pivot = arr[hi]
  let i = lo
  for (let j = lo; j < hi; j++) {
    if (arr[j] < pivot) {
      ;[arr[i], arr[j]] = [arr[j], arr[i]]
      i++
    }
  }
  ;[arr[i], arr[hi]] = [arr[hi], arr[i]]
  return i
}
```

- Time: O(n log n) average, O(n²) worst case (when the pivot is always the smallest or largest element).
- Space: O(log n) — in-place, just recursion stack.
- Stable: No.
Quicksort is faster in practice than merge sort for most inputs because it sorts in-place (better cache locality) and has lower constant factors. V8 uses TimSort (a merge sort/insertion sort hybrid) for Array.prototype.sort(), but many other systems default to quicksort variants.
Heapsort
Heapsort builds a max-heap from the array, then repeatedly extracts the maximum to build the sorted result.
- Time: O(n log n) — always.
- Space: O(1) — in-place.
- Stable: No.
Heapsort is rarely the fastest in practice (poor cache locality), but it's guaranteed O(n log n) with O(1) space—useful when memory is tight.
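Since heapsort's code doesn't appear above, here's a minimal sketch (assuming a plain array of numbers; `siftDown` is a helper name chosen for illustration):

```js
function heapSort(arr) {
  const n = arr.length
  // Restore the max-heap property for the subtree rooted at i,
  // looking only at the first `size` elements
  function siftDown(i, size) {
    while (true) {
      const left = 2 * i + 1
      const right = 2 * i + 2
      let largest = i
      if (left < size && arr[left] > arr[largest]) largest = left
      if (right < size && arr[right] > arr[largest]) largest = right
      if (largest === i) return
      ;[arr[i], arr[largest]] = [arr[largest], arr[i]]
      i = largest
    }
  }
  // Build a max-heap in place: O(n)
  for (let i = Math.floor(n / 2) - 1; i >= 0; i--) siftDown(i, n)
  // Repeatedly move the max to the end and shrink the heap
  for (let end = n - 1; end > 0; end--) {
    ;[arr[0], arr[end]] = [arr[end], arr[0]]
    siftDown(0, end)
  }
  return arr
}

heapSort([5, 2, 9, 1, 5, 6]) // [1, 2, 5, 5, 6, 9]
```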
When to Care About Sorting
You rarely implement sorting from scratch. But you need to understand:
- Array.sort() is O(n log n) — calling it is fine. Calling it on every render or every request is a design question, not a performance crime.
- Sorting is the bottleneck for many operations. If you can maintain sorted order on insertion (using a binary search to find the position), you avoid re-sorting entirely.
- Stable vs unstable matters when you sort by one field then another. A stable sort preserves the order of equal elements from the previous sort.
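The "maintain sorted order on insertion" idea can be sketched with a binary search for the insertion point (`sortedInsert` is a hypothetical helper, not a built-in):

```js
// Insert `value` into an already-sorted array: O(log n) to find the spot,
// O(n) to shift elements, versus O(n log n) to push and re-sort.
function sortedInsert(sorted, value) {
  let lo = 0
  let hi = sorted.length
  // Binary search for the leftmost position where value fits
  while (lo < hi) {
    const mid = (lo + hi) >>> 1
    if (sorted[mid] < value) lo = mid + 1
    else hi = mid
  }
  sorted.splice(lo, 0, value)
  return sorted
}

sortedInsert([1, 3, 5, 9], 4) // [1, 3, 4, 5, 9]
```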
Searching
Binary Search
Binary search finds a target in a sorted array by repeatedly halving the search space.
```js
function binarySearch(sorted, target) {
  let lo = 0
  let hi = sorted.length - 1
  while (lo <= hi) {
    const mid = (lo + hi) >>> 1
    if (sorted[mid] === target) return mid
    if (sorted[mid] < target) lo = mid + 1
    else hi = mid - 1
  }
  return -1
}
```

- Time: O(log n)
- Prerequisite: Array must be sorted.
Binary search isn't just for finding exact values. Variations find:
- The first or last occurrence of a value.
- The insertion point for a value (lower/upper bound).
- The answer to optimization problems (binary search on the answer).
```js
// Find the first index where the predicate becomes true
function lowerBound(arr, predicate) {
  let lo = 0
  let hi = arr.length
  while (lo < hi) {
    const mid = (lo + hi) >>> 1
    if (predicate(arr[mid])) hi = mid
    else lo = mid + 1
  }
  return lo
}
```

Practical example: Database indexes use B-tree lookups, which are essentially multi-level binary searches. When you write WHERE created_at > '2025-01-01', the database does a binary search on the index to find the starting point, then scans forward.
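"Binary search on the answer" deserves a quick sketch: when a predicate is monotone over candidate answers, you can binary search the answer space itself. Integer square root is the smallest example (`intSqrt` is a name chosen for illustration):

```js
// Largest x such that x * x <= n. The predicate "mid * mid <= n" is
// monotone: true for small mid, false for large mid, so we can binary
// search over candidate answers instead of over an array.
function intSqrt(n) {
  let lo = 0
  let hi = n
  while (lo < hi) {
    // Bias mid upward so the loop makes progress when lo + 1 === hi
    const mid = lo + Math.floor((hi - lo + 1) / 2)
    if (mid * mid <= n) lo = mid
    else hi = mid - 1
  }
  return lo
}

intSqrt(10) // 3
intSqrt(49) // 7
```

The same shape solves "minimize the maximum" problems: binary search over the proposed maximum, with a feasibility check as the predicate.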
Graph Traversals
Graphs model relationships, and traversals are how you explore them. The two fundamental approaches—BFS and DFS—solve different problems.
BFS (Breadth-First Search)
BFS explores all nodes at the current depth before going deeper. It uses a queue.
```js
function bfs(graph, start) {
  const visited = new Set([start])
  const queue = [start]
  const order = []
  while (queue.length > 0) {
    const node = queue.shift()
    order.push(node)
    for (const neighbor of graph[node] ?? []) {
      if (!visited.has(neighbor)) {
        visited.add(neighbor)
        queue.push(neighbor)
      }
    }
  }
  return order
}
```

BFS finds the shortest path in unweighted graphs. The first time BFS reaches a node, it has found the minimum number of edges to get there.
Use BFS for: shortest path (unweighted), level-order traversal of trees, finding all nodes within k steps, social network "degrees of separation."
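For example, a small BFS variant that records the minimum edge count to every reachable node (same adjacency-list shape as above; `bfsDistances` is an illustrative name):

```js
// Minimum number of edges from `start` to each reachable node
function bfsDistances(graph, start) {
  const dist = { [start]: 0 }
  const queue = [start]
  while (queue.length > 0) {
    const node = queue.shift()
    for (const neighbor of graph[node] ?? []) {
      if (!(neighbor in dist)) {
        // First discovery is the shortest route in an unweighted graph
        dist[neighbor] = dist[node] + 1
        queue.push(neighbor)
      }
    }
  }
  return dist
}

const g = { a: ["b", "c"], b: ["d"], c: ["d"], d: [] }
bfsDistances(g, "a") // { a: 0, b: 1, c: 1, d: 2 }
```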
DFS (Depth-First Search)
DFS goes as deep as possible before backtracking. It uses a stack (or recursion, which is an implicit stack).
```js
function dfs(graph, start) {
  const visited = new Set()
  const order = []
  function visit(node) {
    if (visited.has(node)) return
    visited.add(node)
    order.push(node)
    for (const neighbor of graph[node] ?? []) {
      visit(neighbor)
    }
  }
  visit(start)
  return order
}
```

Use DFS for: detecting cycles, topological sorting (build order), finding connected components, maze solving, generating permutations/combinations.
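Cycle detection, the first item in that list, can be sketched by tracking which nodes are still "in progress" on the current DFS path (`hasCycle` is an illustrative name; a back edge to an in-progress node means a cycle):

```js
// Detect a cycle in a directed graph given as an adjacency list
function hasCycle(graph) {
  const done = new Set()        // fully explored, known cycle-free
  const inProgress = new Set()  // on the current DFS path
  function visit(node) {
    if (done.has(node)) return false
    if (inProgress.has(node)) return true // back edge: cycle found
    inProgress.add(node)
    for (const neighbor of graph[node] ?? []) {
      if (visit(neighbor)) return true
    }
    inProgress.delete(node)
    done.add(node)
    return false
  }
  return Object.keys(graph).some((node) => visit(node))
}

hasCycle({ a: ["b"], b: ["c"], c: [] })    // false
hasCycle({ a: ["b"], b: ["c"], c: ["a"] }) // true
```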
Topological Sort
If you have a directed acyclic graph (DAG)—like a dependency graph—topological sort gives you a valid ordering where every dependency comes before the thing that depends on it.
```js
function topologicalSort(graph) {
  const visited = new Set()
  const result = []
  function visit(node) {
    if (visited.has(node)) return
    visited.add(node)
    for (const dep of graph[node] ?? []) {
      visit(dep)
    }
    // Post-order: a node is pushed only after all of its dependencies
    result.push(node)
  }
  for (const node of Object.keys(graph)) {
    visit(node)
  }
  return result
}

const deps = {
  app: ["utils", "auth"],
  auth: ["utils", "db"],
  utils: [],
  db: [],
}

topologicalSort(deps) // ['utils', 'db', 'auth', 'app']
```

This is exactly what build tools do. Turbopack, webpack, and package managers all perform topological sorts on dependency graphs to determine execution order.
Shortest Path: Dijkstra's Algorithm
When edges have weights (distances, costs, latencies), BFS isn't enough. Dijkstra's algorithm finds the shortest path from a source to all other nodes in a weighted graph with non-negative weights.
```js
function dijkstra(graph, start) {
  const dist = {}
  const visited = new Set()
  for (const node of Object.keys(graph)) {
    dist[node] = Infinity
  }
  dist[start] = 0
  while (visited.size < Object.keys(graph).length) {
    // Pick the unvisited node with the smallest distance
    let current = null
    for (const node of Object.keys(dist)) {
      if (
        !visited.has(node) &&
        (current === null || dist[node] < dist[current])
      ) {
        current = node
      }
    }
    if (current === null || dist[current] === Infinity) break
    visited.add(current)
    for (const [neighbor, weight] of graph[current]) {
      const newDist = dist[current] + weight
      if (newDist < dist[neighbor]) {
        dist[neighbor] = newDist
      }
    }
  }
  return dist
}
```

With a proper min-heap for selecting the next node, Dijkstra runs in O((V + E) log V). Without one, it's O(V²). This is the algorithm behind GPS routing, network routing protocols, and any "find the cheapest path" problem.
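That min-heap could be sketched as a binary heap over [priority, value] pairs, a minimal illustration rather than a production implementation:

```js
// Binary min-heap: push and pop are O(log n)
class MinHeap {
  constructor() {
    this.items = [] // array of [priority, value] pairs
  }
  push(priority, value) {
    const items = this.items
    items.push([priority, value])
    let i = items.length - 1
    // Sift up while smaller than the parent
    while (i > 0) {
      const parent = (i - 1) >> 1
      if (items[parent][0] <= items[i][0]) break
      ;[items[parent], items[i]] = [items[i], items[parent]]
      i = parent
    }
  }
  pop() {
    const items = this.items
    const top = items[0]
    const last = items.pop()
    if (items.length > 0) {
      items[0] = last
      // Sift down while larger than a child
      let i = 0
      while (true) {
        const left = 2 * i + 1
        const right = 2 * i + 2
        let smallest = i
        if (left < items.length && items[left][0] < items[smallest][0]) smallest = left
        if (right < items.length && items[right][0] < items[smallest][0]) smallest = right
        if (smallest === i) break
        ;[items[smallest], items[i]] = [items[i], items[smallest]]
        i = smallest
      }
    }
    return top
  }
  get size() {
    return this.items.length
  }
}

const heap = new MinHeap()
heap.push(3, "c")
heap.push(1, "a")
heap.push(2, "b")
heap.pop() // [1, 'a']
```

In Dijkstra, the heap replaces the linear "pick unvisited node with smallest distance" scan: pop the closest node instead of searching for it.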
Dynamic Programming
Dynamic programming (DP) solves problems by breaking them into overlapping subproblems and caching results. If you've ever used useMemo in React, you've used the core idea.
The classic example: the Fibonacci sequence.
```js
// Naive recursion — O(2ⁿ), recomputes the same values endlessly
function fib(n) {
  if (n <= 1) return n
  return fib(n - 1) + fib(n - 2)
}

// DP with memoization — O(n), each value computed once
function fibDP(n, memo = new Map()) {
  if (n <= 1) return n
  if (memo.has(n)) return memo.get(n)
  const result = fibDP(n - 1, memo) + fibDP(n - 2, memo)
  memo.set(n, result)
  return result
}

// Bottom-up DP — O(n) time, O(1) space
function fibBottomUp(n) {
  if (n <= 1) return n
  let prev2 = 0
  let prev1 = 1
  for (let i = 2; i <= n; i++) {
    const current = prev1 + prev2
    prev2 = prev1
    prev1 = current
  }
  return prev1
}
```

How to spot DP problems:
- The problem asks for an optimal value (min cost, max profit, number of ways).
- You can define the answer in terms of smaller subproblems.
- Subproblems overlap (the same computation appears multiple times).
Real-world DP: text diffing (Git uses a variant of longest common subsequence), spell checking (edit distance), knapsack-style resource allocation, and certain caching/scheduling problems.
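Edit distance, mentioned above, makes a compact DP example. A standard Levenshtein sketch (a full table for clarity; a two-row version would cut space to O(min(m, n))):

```js
// Levenshtein edit distance: minimum insertions, deletions, and
// substitutions to turn string a into string b.
function editDistance(a, b) {
  // dp[i][j] = edits to turn a[0..i) into b[0..j)
  const dp = Array.from({ length: a.length + 1 }, () =>
    new Array(b.length + 1).fill(0)
  )
  for (let i = 0; i <= a.length; i++) dp[i][0] = i // delete everything
  for (let j = 0; j <= b.length; j++) dp[0][j] = j // insert everything
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      if (a[i - 1] === b[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] // characters match, no edit needed
      } else {
        dp[i][j] = 1 + Math.min(
          dp[i - 1][j],    // delete from a
          dp[i][j - 1],    // insert into a
          dp[i - 1][j - 1] // substitute
        )
      }
    }
  }
  return dp[a.length][b.length]
}

editDistance("kitten", "sitting") // 3
```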
Greedy Algorithms
Greedy algorithms make the locally optimal choice at each step, hoping it leads to a global optimum. They don't always work, but when they do, they're simple and fast.
When greedy works: The problem has optimal substructure (an optimal solution contains optimal solutions to subproblems) and the greedy choice property (a locally optimal choice leads to a global optimum).
Classic example: making change with the fewest coins (with standard denominations).
```js
function makeChange(amount, coins = [25, 10, 5, 1]) {
  const result = []
  for (const coin of coins) {
    while (amount >= coin) {
      result.push(coin)
      amount -= coin
    }
  }
  return result
}

makeChange(67) // [25, 25, 10, 5, 1, 1]
```

This works for US coins but fails for arbitrary denominations (e.g., coins [1, 3, 4] and amount 6: greedy gives [4, 1, 1] but optimal is [3, 3]). When greedy fails, you need DP.
Practical greedy algorithms: Huffman coding (compression), Kruskal's/Prim's minimum spanning tree, job scheduling with deadlines, interval scheduling (picking non-overlapping meetings).
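Interval scheduling, the last item in that list, is a greedy algorithm that is provably optimal: sort by end time, then take every meeting that starts after the previous one ended. A sketch (`maxNonOverlapping` is an illustrative name; intervals are [start, end] pairs):

```js
// Pick the maximum number of non-overlapping intervals.
// Greedy choice: always take the interval that ends earliest.
function maxNonOverlapping(intervals) {
  const sorted = [...intervals].sort((x, y) => x[1] - y[1])
  const chosen = []
  let lastEnd = -Infinity
  for (const [start, end] of sorted) {
    if (start >= lastEnd) {
      chosen.push([start, end])
      lastEnd = end
    }
  }
  return chosen
}

maxNonOverlapping([[1, 4], [3, 5], [0, 6], [5, 7], [8, 9]])
// [[1, 4], [5, 7], [8, 9]]
```

The intuition for why the greedy choice is safe: the earliest-ending interval leaves the most room for everything that follows.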
Divide and Conquer
Divide and conquer splits a problem into independent subproblems, solves each recursively, and combines the results. Unlike DP, the subproblems don't overlap.
The pattern:
- Divide the problem into smaller pieces.
- Conquer each piece recursively.
- Combine the results.
Merge sort is divide and conquer. Binary search is divide and conquer (though it only conquers one half). Quicksort is divide and conquer. The FFT (Fast Fourier Transform, used in signal processing and polynomial multiplication) is divide and conquer.
```js
// Find the maximum element — divide and conquer
function findMax(arr, lo = 0, hi = arr.length - 1) {
  if (lo === hi) return arr[lo]
  const mid = (lo + hi) >>> 1
  const leftMax = findMax(arr, lo, mid)
  const rightMax = findMax(arr, mid + 1, hi)
  return Math.max(leftMax, rightMax)
}
```

This is O(n)—not faster than a simple loop for this problem—but the pattern generalizes to problems where splitting the work genuinely reduces complexity (merge sort reduces O(n²) to O(n log n)).
Recognizing Patterns
The skill isn't memorizing algorithms. It's recognizing which pattern a problem fits:
| Pattern | Signal | Examples |
|---|---|---|
| Sorting + scan | "Find pairs", "find duplicates", "merge intervals" | Two-pointer on sorted data |
| Hash map | "Count occurrences", "find complement", "group by" | Frequency maps, two-sum |
| BFS | "Shortest path", "minimum steps", "level by level" | Unweighted shortest path |
| DFS | "All paths", "detect cycle", "connected components" | Graph exploration, backtracking |
| Binary search | "Sorted data", "find boundary", "minimize maximum" | Search in sorted array, binary search on answer |
| DP | "Number of ways", "min cost", "overlapping subproblems" | Fibonacci, knapsack, edit distance |
| Greedy | "Scheduling", "earliest deadline", "most value per unit" | Interval scheduling, Huffman |
| Divide and conquer | "Split in half", "independent subproblems" | Merge sort, closest pair |
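The first row of the table (sorting plus a two-pointer scan) might look like this for "find a pair summing to a target" (`pairWithSum` is an illustrative name):

```js
// Sort once, then walk pointers inward from both ends:
// O(n log n) + O(n), versus O(n²) for the nested-loop version.
function pairWithSum(nums, target) {
  const sorted = [...nums].sort((a, b) => a - b)
  let lo = 0
  let hi = sorted.length - 1
  while (lo < hi) {
    const sum = sorted[lo] + sorted[hi]
    if (sum === target) return [sorted[lo], sorted[hi]]
    if (sum < target) lo++ // need a bigger sum
    else hi-- // need a smaller sum
  }
  return null
}

pairWithSum([8, 2, 11, 7, 15], 9) // [2, 7]
```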
The Pragmatic Takeaway
You don't need to solve 500 LeetCode problems. You need to internalize a handful of strategies and recognize when each one applies.
In day-to-day engineering, the most impactful algorithmic knowledge is mundane: knowing that a sorted array enables binary search. Knowing that a hash map turns O(n) lookups into O(1). Knowing that BFS gives shortest paths. Knowing that dependency resolution is topological sort. Knowing that your bundler, your database query planner, and your package manager are all running well-studied algorithms under the hood.
The algorithms themselves haven't changed in decades. What changes is your ability to see them in the problems you face. That pattern recognition—not the ability to code quicksort from memory—is what separates engineers who design scalable systems from engineers who wonder why things are slow.